00:00:00.001 Started by upstream project "autotest-per-patch" build number 132560
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.026 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.027 The recommended git tool is: git
00:00:00.027 using credential 00000000-0000-0000-0000-000000000002
00:00:00.028 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.049 Fetching changes from the remote Git repository
00:00:00.050 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.076 Using shallow fetch with depth 1
00:00:00.076 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.076 > git --version # timeout=10
00:00:00.107 > git --version # 'git version 2.39.2'
00:00:00.107 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.136 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.136 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.626 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.641 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.658 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.658 > git config core.sparsecheckout # timeout=10
00:00:02.672 > git read-tree -mu HEAD # timeout=10
00:00:02.693 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.718 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.718 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.827 [Pipeline] Start of Pipeline
00:00:02.846 [Pipeline] library
00:00:02.848 Loading library shm_lib@master
00:00:02.849 Library shm_lib@master is cached. Copying from home.
00:00:02.863 [Pipeline] node
00:00:02.870 Running on VM-host-SM0 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.872 [Pipeline] {
00:00:02.881 [Pipeline] catchError
00:00:02.883 [Pipeline] {
00:00:02.898 [Pipeline] wrap
00:00:02.905 [Pipeline] {
00:00:02.914 [Pipeline] stage
00:00:02.916 [Pipeline] { (Prologue)
00:00:02.930 [Pipeline] echo
00:00:02.931 Node: VM-host-SM0
00:00:02.935 [Pipeline] cleanWs
00:00:02.942 [WS-CLEANUP] Deleting project workspace...
00:00:02.942 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.947 [WS-CLEANUP] done
00:00:03.124 [Pipeline] setCustomBuildProperty
00:00:03.202 [Pipeline] httpRequest
00:00:03.540 [Pipeline] echo
00:00:03.542 Sorcerer 10.211.164.20 is alive
00:00:03.551 [Pipeline] retry
00:00:03.553 [Pipeline] {
00:00:03.567 [Pipeline] httpRequest
00:00:03.571 HttpMethod: GET
00:00:03.572 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.572 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.580 Response Code: HTTP/1.1 200 OK
00:00:03.580 Success: Status code 200 is in the accepted range: 200,404
00:00:03.581 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.308 [Pipeline] }
00:00:14.326 [Pipeline] // retry
00:00:14.332 [Pipeline] sh
00:00:14.658 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.674 [Pipeline] httpRequest
00:00:15.088 [Pipeline] echo
00:00:15.089 Sorcerer 10.211.164.20 is alive
00:00:15.098 [Pipeline] retry
00:00:15.100 [Pipeline] {
00:00:15.113 [Pipeline] httpRequest
00:00:15.117 HttpMethod: GET
00:00:15.118 URL: http://10.211.164.20/packages/spdk_a640d9f989075bd552c09c63a9eac6f5cc769887.tar.gz
00:00:15.118 Sending request to url: http://10.211.164.20/packages/spdk_a640d9f989075bd552c09c63a9eac6f5cc769887.tar.gz
00:00:15.123 Response Code: HTTP/1.1 200 OK
00:00:15.124 Success: Status code 200 is in the accepted range: 200,404
00:00:15.124 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_a640d9f989075bd552c09c63a9eac6f5cc769887.tar.gz
00:04:53.428 [Pipeline] }
00:04:53.445 [Pipeline] // retry
00:04:53.453 [Pipeline] sh
00:04:53.730 + tar --no-same-owner -xf spdk_a640d9f989075bd552c09c63a9eac6f5cc769887.tar.gz
00:04:57.103 [Pipeline] sh
00:04:57.382 + git -C spdk log --oneline -n5
00:04:57.382 a640d9f98 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:04:57.382 ae1917872 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:04:57.382 ff68c6e68 nvmf: Expose DIF type of namespace to host again
00:04:57.382 dd10a9655 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:04:57.382 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:04:57.400 [Pipeline] writeFile
00:04:57.415 [Pipeline] sh
00:04:57.696 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:04:57.707 [Pipeline] sh
00:04:57.986 + cat autorun-spdk.conf
00:04:57.986 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:57.986 SPDK_RUN_ASAN=1
00:04:57.987 SPDK_RUN_UBSAN=1
00:04:57.987 SPDK_TEST_RAID=1
00:04:57.987 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:57.993 RUN_NIGHTLY=0
00:04:57.994 [Pipeline] }
00:04:58.006 [Pipeline] // stage
00:04:58.021 [Pipeline] stage
00:04:58.024 [Pipeline] { (Run VM)
00:04:58.039 [Pipeline] sh
00:04:58.317 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:04:58.317 + echo 'Start stage prepare_nvme.sh'
00:04:58.317 Start stage prepare_nvme.sh
00:04:58.317 + [[ -n 1 ]]
00:04:58.317 + disk_prefix=ex1
00:04:58.317 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:04:58.317 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:04:58.317 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:04:58.317 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:58.317 ++ SPDK_RUN_ASAN=1
00:04:58.317 ++ SPDK_RUN_UBSAN=1
00:04:58.317 ++ SPDK_TEST_RAID=1
00:04:58.317 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:58.317 ++ RUN_NIGHTLY=0
00:04:58.317 + cd /var/jenkins/workspace/raid-vg-autotest
00:04:58.317 + nvme_files=()
00:04:58.317 + declare -A nvme_files
00:04:58.317 + backend_dir=/var/lib/libvirt/images/backends
00:04:58.317 + nvme_files['nvme.img']=5G
00:04:58.317 + nvme_files['nvme-cmb.img']=5G
00:04:58.317 + nvme_files['nvme-multi0.img']=4G
00:04:58.317 + nvme_files['nvme-multi1.img']=4G
00:04:58.317 + nvme_files['nvme-multi2.img']=4G
00:04:58.317 + nvme_files['nvme-openstack.img']=8G
00:04:58.317 + nvme_files['nvme-zns.img']=5G
00:04:58.317 + (( SPDK_TEST_NVME_PMR == 1 ))
00:04:58.317 + (( SPDK_TEST_FTL == 1 ))
00:04:58.317 + (( SPDK_TEST_NVME_FDP == 1 ))
00:04:58.317 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:04:58.317 + for nvme in "${!nvme_files[@]}"
00:04:58.317 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:04:58.317 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:04:58.317 + for nvme in "${!nvme_files[@]}"
00:04:58.317 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:04:58.318 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:04:58.318 + for nvme in "${!nvme_files[@]}"
00:04:58.318 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:04:58.318 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:04:58.318 + for nvme in "${!nvme_files[@]}"
00:04:58.318 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:04:58.318 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:04:58.318 + for nvme in "${!nvme_files[@]}"
00:04:58.318 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:04:58.318 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:04:58.318 + for nvme in "${!nvme_files[@]}"
00:04:58.318 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:04:58.318 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:04:58.318 + for nvme in "${!nvme_files[@]}"
00:04:58.318 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:04:58.602 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:04:58.602 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:04:58.602 + echo 'End stage prepare_nvme.sh'
00:04:58.602 End stage prepare_nvme.sh
00:04:58.633 [Pipeline] sh
00:04:58.913 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:04:58.913 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:04:58.913
00:04:58.913 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:04:58.913 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:04:58.913 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:04:58.913 HELP=0
00:04:58.913 DRY_RUN=0
00:04:58.913 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:04:58.913 NVME_DISKS_TYPE=nvme,nvme,
00:04:58.913 NVME_AUTO_CREATE=0
00:04:58.913 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:04:58.913 NVME_CMB=,,
00:04:58.913 NVME_PMR=,,
00:04:58.913 NVME_ZNS=,,
00:04:58.913 NVME_MS=,,
00:04:58.913 NVME_FDP=,,
00:04:58.913 SPDK_VAGRANT_DISTRO=fedora39
00:04:58.913 SPDK_VAGRANT_VMCPU=10
00:04:58.913 SPDK_VAGRANT_VMRAM=12288
00:04:58.913 SPDK_VAGRANT_PROVIDER=libvirt
00:04:58.913 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:04:58.913 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:04:58.913 SPDK_OPENSTACK_NETWORK=0
00:04:58.913 VAGRANT_PACKAGE_BOX=0
00:04:58.913 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:04:58.913 FORCE_DISTRO=true
00:04:58.913 VAGRANT_BOX_VERSION=
00:04:58.913 EXTRA_VAGRANTFILES=
00:04:58.913 NIC_MODEL=e1000
00:04:58.913
00:04:58.913 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:04:58.914 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:05:02.200 Bringing machine 'default' up with 'libvirt' provider...
00:05:03.135 ==> default: Creating image (snapshot of base box volume).
00:05:03.135 ==> default: Creating domain with the following settings...
00:05:03.135 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732681610_0ff65430a5eb1990b84e
00:05:03.135 ==> default: -- Domain type: kvm
00:05:03.135 ==> default: -- Cpus: 10
00:05:03.135 ==> default: -- Feature: acpi
00:05:03.135 ==> default: -- Feature: apic
00:05:03.135 ==> default: -- Feature: pae
00:05:03.135 ==> default: -- Memory: 12288M
00:05:03.135 ==> default: -- Memory Backing: hugepages:
00:05:03.135 ==> default: -- Management MAC:
00:05:03.135 ==> default: -- Loader:
00:05:03.135 ==> default: -- Nvram:
00:05:03.135 ==> default: -- Base box: spdk/fedora39
00:05:03.135 ==> default: -- Storage pool: default
00:05:03.135 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732681610_0ff65430a5eb1990b84e.img (20G)
00:05:03.135 ==> default: -- Volume Cache: default
00:05:03.135 ==> default: -- Kernel:
00:05:03.135 ==> default: -- Initrd:
00:05:03.135 ==> default: -- Graphics Type: vnc
00:05:03.135 ==> default: -- Graphics Port: -1
00:05:03.135 ==> default: -- Graphics IP: 127.0.0.1
00:05:03.135 ==> default: -- Graphics Password: Not defined
00:05:03.135 ==> default: -- Video Type: cirrus
00:05:03.135 ==> default: -- Video VRAM: 9216
00:05:03.135 ==> default: -- Sound Type:
00:05:03.135 ==> default: -- Keymap: en-us
00:05:03.135 ==> default: -- TPM Path:
00:05:03.135 ==> default: -- INPUT: type=mouse, bus=ps2
00:05:03.135 ==> default: -- Command line args:
00:05:03.135 ==> default: -> value=-device,
00:05:03.135 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:05:03.135 ==> default: -> value=-drive,
00:05:03.135 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:05:03.135 ==> default: -> value=-device,
00:05:03.135 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:03.135 ==> default: -> value=-device,
00:05:03.136 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:05:03.136 ==> default: -> value=-drive,
00:05:03.136 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:05:03.136 ==> default: -> value=-device,
00:05:03.136 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:03.136 ==> default: -> value=-drive,
00:05:03.136 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:05:03.136 ==> default: -> value=-device,
00:05:03.136 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:03.136 ==> default: -> value=-drive,
00:05:03.136 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:05:03.136 ==> default: -> value=-device,
00:05:03.136 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:03.396 ==> default: Creating shared folders metadata...
00:05:03.396 ==> default: Starting domain.
00:05:05.299 ==> default: Waiting for domain to get an IP address...
00:05:23.377 ==> default: Waiting for SSH to become available...
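The `-device`/`-drive` pairs above attach each backing file as an NVMe controller plus one `nvme-ns` namespace per image. The composition pattern can be sketched as a Bash array (a reduced, hypothetical example of how such an argument list is assembled, not the exact vagrant-generated command):

```shell
#!/bin/bash
# Sketch: one "nvme" controller, one backing drive, one "nvme-ns" namespace,
# mirroring the nvme-1 controller seen in the domain settings above.
backend=/var/lib/libvirt/images/backends
args=(
  -device "nvme,id=nvme-1,serial=12341,addr=0x11"
  -drive  "format=raw,file=$backend/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0"
  -device "nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096"
)
# Extra namespaces on the same controller would append further -drive/-device
# pairs with nsid=2, nsid=3, and so on.
printf '%s\n' "${args[@]}"
```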
00:05:23.377 ==> default: Configuring and enabling network interfaces...
00:05:27.582 default: SSH address: 192.168.121.181:22
00:05:27.582 default: SSH username: vagrant
00:05:27.582 default: SSH auth method: private key
00:05:29.486 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:05:37.746 ==> default: Mounting SSHFS shared folder...
00:05:39.123 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:05:39.123 ==> default: Checking Mount..
00:05:40.500 ==> default: Folder Successfully Mounted!
00:05:40.500 ==> default: Running provisioner: file...
00:05:41.067 default: ~/.gitconfig => .gitconfig
00:05:41.634
00:05:41.634 SUCCESS!
00:05:41.634
00:05:41.634 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:05:41.634 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:05:41.634 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
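The prepare_nvme.sh stage earlier in this log drives create_nvme_img.sh from a Bash associative array that maps each image filename to its size. A minimal self-contained sketch of that pattern (using `truncate` to make sparse files in a temp directory as a stand-in for the real create_nvme_img.sh, which formats raw images):

```shell
#!/bin/bash
# Sketch of the prepare_nvme.sh pattern: filename -> size map,
# one creation call per entry. truncate stands in for create_nvme_img.sh.
declare -A nvme_files=(
  [nvme.img]=5G
  [nvme-multi0.img]=4G
)
backend_dir=$(mktemp -d)
for nvme in "${!nvme_files[@]}"; do
  truncate -s "${nvme_files[$nvme]}" "$backend_dir/ex1-$nvme"
done
ls "$backend_dir"
```

Because the files are sparse, they report the full size but consume almost no disk until written.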
00:05:41.634
00:05:41.661 [Pipeline] }
00:05:41.713 [Pipeline] // stage
00:05:41.719 [Pipeline] dir
00:05:41.719 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:05:41.720 [Pipeline] {
00:05:41.728 [Pipeline] catchError
00:05:41.729 [Pipeline] {
00:05:41.736 [Pipeline] sh
00:05:42.009 + vagrant ssh-config --host vagrant
00:05:42.009 + sed -ne /^Host/,$p
00:05:42.009 + tee ssh_conf
00:05:46.215 Host vagrant
00:05:46.215 HostName 192.168.121.181
00:05:46.215 User vagrant
00:05:46.215 Port 22
00:05:46.215 UserKnownHostsFile /dev/null
00:05:46.215 StrictHostKeyChecking no
00:05:46.215 PasswordAuthentication no
00:05:46.215 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:05:46.216 IdentitiesOnly yes
00:05:46.216 LogLevel FATAL
00:05:46.216 ForwardAgent yes
00:05:46.216 ForwardX11 yes
00:05:46.216
00:05:46.226 [Pipeline] withEnv
00:05:46.228 [Pipeline] {
00:05:46.239 [Pipeline] sh
00:05:46.516 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:05:46.516 source /etc/os-release
00:05:46.516 [[ -e /image.version ]] && img=$(< /image.version)
00:05:46.516 # Minimal, systemd-like check.
00:05:46.516 if [[ -e /.dockerenv ]]; then
00:05:46.516 # Clear garbage from the node's name:
00:05:46.516 # agt-er_autotest_547-896 -> autotest_547-896
00:05:46.516 # $HOSTNAME is the actual container id
00:05:46.516 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:05:46.516 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:05:46.516 # We can assume this is a mount from a host where container is running,
00:05:46.516 # so fetch its hostname to easily identify the target swarm worker.
00:05:46.516 container="$(< /etc/hostname) ($agent)"
00:05:46.516 else
00:05:46.516 # Fallback
00:05:46.516 container=$agent
00:05:46.516 fi
00:05:46.516 fi
00:05:46.516 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:05:46.516
00:05:46.787 [Pipeline] }
00:05:46.807 [Pipeline] // withEnv
00:05:46.821 [Pipeline] setCustomBuildProperty
00:05:46.839 [Pipeline] stage
00:05:46.842 [Pipeline] { (Tests)
00:05:46.859 [Pipeline] sh
00:05:47.141 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:05:47.414 [Pipeline] sh
00:05:47.695 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:05:47.969 [Pipeline] timeout
00:05:47.970 Timeout set to expire in 1 hr 30 min
00:05:47.972 [Pipeline] {
00:05:47.986 [Pipeline] sh
00:05:48.266 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:05:48.834 HEAD is now at a640d9f98 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:05:48.848 [Pipeline] sh
00:05:49.127 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:05:49.401 [Pipeline] sh
00:05:49.682 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:05:49.958 [Pipeline] sh
00:05:50.277 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:05:50.277 ++ readlink -f spdk_repo
00:05:50.277 + DIR_ROOT=/home/vagrant/spdk_repo
00:05:50.277 + [[ -n /home/vagrant/spdk_repo ]]
00:05:50.277 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:05:50.277 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:05:50.277 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:05:50.277 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:05:50.278 + [[ -d /home/vagrant/spdk_repo/output ]]
00:05:50.278 + [[ raid-vg-autotest == pkgdep-* ]]
00:05:50.278 + cd /home/vagrant/spdk_repo
00:05:50.278 + source /etc/os-release
00:05:50.278 ++ NAME='Fedora Linux'
00:05:50.278 ++ VERSION='39 (Cloud Edition)'
00:05:50.278 ++ ID=fedora
00:05:50.278 ++ VERSION_ID=39
00:05:50.278 ++ VERSION_CODENAME=
00:05:50.278 ++ PLATFORM_ID=platform:f39
00:05:50.278 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:50.278 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:50.278 ++ LOGO=fedora-logo-icon
00:05:50.278 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:50.278 ++ HOME_URL=https://fedoraproject.org/
00:05:50.278 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:50.278 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:50.278 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:50.278 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:50.278 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:50.278 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:50.278 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:50.278 ++ SUPPORT_END=2024-11-12
00:05:50.278 ++ VARIANT='Cloud Edition'
00:05:50.278 ++ VARIANT_ID=cloud
00:05:50.278 + uname -a
00:05:50.278 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:50.278 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:50.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:50.857 Hugepages
00:05:50.857 node hugesize free / total
00:05:50.857 node0 1048576kB 0 / 0
00:05:50.857 node0 2048kB 0 / 0
00:05:50.857
00:05:50.857 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:50.857 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:50.857 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:51.116 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:05:51.116 + rm -f /tmp/spdk-ld-path
00:05:51.116 + source autorun-spdk.conf
00:05:51.116 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:51.116 ++ SPDK_RUN_ASAN=1
00:05:51.116 ++ SPDK_RUN_UBSAN=1
00:05:51.116 ++ SPDK_TEST_RAID=1
00:05:51.116 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:51.116 ++ RUN_NIGHTLY=0
00:05:51.116 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:51.116 + [[ -n '' ]]
00:05:51.116 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:05:51.116 + for M in /var/spdk/build-*-manifest.txt
00:05:51.116 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:51.116 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:05:51.116 + for M in /var/spdk/build-*-manifest.txt
00:05:51.116 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:51.116 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:05:51.116 + for M in /var/spdk/build-*-manifest.txt
00:05:51.116 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:51.116 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:05:51.116 ++ uname
00:05:51.116 + [[ Linux == \L\i\n\u\x ]]
00:05:51.116 + sudo dmesg -T
00:05:51.116 + sudo dmesg --clear
00:05:51.116 + dmesg_pid=5260
00:05:51.116 + [[ Fedora Linux == FreeBSD ]]
00:05:51.116 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:51.116 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:51.116 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:51.116 + sudo dmesg -Tw
00:05:51.116 + [[ -x /usr/src/fio-static/fio ]]
00:05:51.116 + export FIO_BIN=/usr/src/fio-static/fio
00:05:51.116 + FIO_BIN=/usr/src/fio-static/fio
00:05:51.116 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:51.116 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:51.116 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:51.116 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:51.116 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:51.116 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:51.116 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:51.116 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:51.116 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:51.116 04:27:38 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:05:51.116 04:27:38 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:51.116 04:27:38 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:51.116 04:27:38 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:05:51.116 04:27:38 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:05:51.116 04:27:38 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:05:51.116 04:27:38 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:51.116 04:27:38 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:05:51.116 04:27:38 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:51.116 04:27:38 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:51.375 04:27:38 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:05:51.375 04:27:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:51.375 04:27:38 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:51.375 04:27:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:51.375 04:27:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:51.375 04:27:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:51.375 04:27:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:51.375 04:27:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:51.375 04:27:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:51.375 04:27:38 -- paths/export.sh@5 -- $ export PATH
00:05:51.375 04:27:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:51.375 04:27:38 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:05:51.375 04:27:38 -- common/autobuild_common.sh@493 -- $ date +%s
00:05:51.375 04:27:38 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732681658.XXXXXX
00:05:51.375 04:27:38 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732681658.wTYHBD
00:05:51.375 04:27:38 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:05:51.375 04:27:38 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:05:51.375 04:27:38 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:05:51.375 04:27:38 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:05:51.375 04:27:38 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:05:51.375 04:27:38 -- common/autobuild_common.sh@509 -- $ get_config_params
00:05:51.375 04:27:38 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:05:51.375 04:27:38 -- common/autotest_common.sh@10 -- $ set +x
00:05:51.375 04:27:38 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:05:51.375 04:27:38 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:05:51.375 04:27:38 -- pm/common@17 -- $ local monitor
00:05:51.375 04:27:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:51.375 04:27:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:51.375 04:27:38 -- pm/common@25 -- $ sleep 1
00:05:51.375 04:27:38 -- pm/common@21 -- $ date +%s
00:05:51.375 04:27:38 -- pm/common@21 -- $ date +%s
00:05:51.375 04:27:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732681658
00:05:51.375 04:27:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732681658
00:05:51.375 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732681658_collect-cpu-load.pm.log
00:05:51.375 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732681658_collect-vmstat.pm.log
00:05:52.310 04:27:39 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:05:52.310 04:27:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:52.310 04:27:39 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:52.310 04:27:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:05:52.310 04:27:39 -- spdk/autobuild.sh@16 -- $ date -u
00:05:52.310 Wed Nov 27 04:27:39 AM UTC 2024
00:05:52.310 04:27:39 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:52.310 v25.01-pre-275-ga640d9f98
00:05:52.310 04:27:39 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:05:52.310 04:27:39 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:05:52.310 04:27:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:52.310 04:27:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:52.310 04:27:39 -- common/autotest_common.sh@10 -- $ set +x
00:05:52.310 ************************************
00:05:52.310 START TEST asan
00:05:52.310 ************************************
00:05:52.310 using asan
00:05:52.310 04:27:39 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:05:52.310
00:05:52.310 real 0m0.000s
00:05:52.310 user 0m0.000s
00:05:52.310 sys 0m0.000s
00:05:52.310 04:27:39 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:52.310 ************************************
00:05:52.310 04:27:39 asan -- common/autotest_common.sh@10 -- $ set +x
00:05:52.310 END TEST asan
00:05:52.310 ************************************
00:05:52.310 04:27:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:52.310 04:27:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:52.310 04:27:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:52.310 04:27:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:52.310 04:27:39 -- common/autotest_common.sh@10 -- $ set +x
00:05:52.310 ************************************
00:05:52.310 START TEST ubsan
00:05:52.310 ************************************
00:05:52.310 using ubsan
00:05:52.310 04:27:39 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:52.310
00:05:52.310 real 0m0.000s
00:05:52.310 user 0m0.000s
00:05:52.310 sys 0m0.000s
00:05:52.310 04:27:39 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:52.310 ************************************
00:05:52.310 END TEST ubsan
00:05:52.310 ************************************
00:05:52.310 04:27:39 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:52.310 04:27:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:52.310 04:27:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:52.310 04:27:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:52.310 04:27:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:52.310 04:27:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:52.310 04:27:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:52.310 04:27:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:52.310 04:27:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:52.310 04:27:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:05:52.568 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:52.568 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:53.136 Using 'verbs' RDMA provider
00:06:06.318 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:06:18.513 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:06:18.770 Creating mk/config.mk...done.
00:06:18.770 Creating mk/cc.flags.mk...done.
00:06:18.770 Type 'make' to build.
00:06:18.770 04:28:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:06:18.770 04:28:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:18.770 04:28:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:18.770 04:28:06 -- common/autotest_common.sh@10 -- $ set +x
00:06:18.770 ************************************
00:06:18.770 START TEST make
00:06:18.770 ************************************
00:06:18.770 04:28:06 make -- common/autotest_common.sh@1129 -- $ make -j10
00:06:19.337 make[1]: Nothing to be done for 'all'.
00:06:37.507 The Meson build system 00:06:37.507 Version: 1.5.0 00:06:37.507 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:06:37.507 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:06:37.507 Build type: native build 00:06:37.507 Program cat found: YES (/usr/bin/cat) 00:06:37.507 Project name: DPDK 00:06:37.507 Project version: 24.03.0 00:06:37.507 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:37.507 C linker for the host machine: cc ld.bfd 2.40-14 00:06:37.507 Host machine cpu family: x86_64 00:06:37.507 Host machine cpu: x86_64 00:06:37.507 Message: ## Building in Developer Mode ## 00:06:37.507 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:37.507 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:06:37.507 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:37.507 Program python3 found: YES (/usr/bin/python3) 00:06:37.507 Program cat found: YES (/usr/bin/cat) 00:06:37.507 Compiler for C supports arguments -march=native: YES 00:06:37.507 Checking for size of "void *" : 8 00:06:37.507 Checking for size of "void *" : 8 (cached) 00:06:37.507 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:06:37.507 Library m found: YES 00:06:37.507 Library numa found: YES 00:06:37.507 Has header "numaif.h" : YES 00:06:37.507 Library fdt found: NO 00:06:37.507 Library execinfo found: NO 00:06:37.507 Has header "execinfo.h" : YES 00:06:37.507 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:37.507 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:37.507 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:37.507 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:37.507 Run-time dependency openssl found: YES 3.1.1 00:06:37.507 Run-time dependency libpcap found: YES 1.10.4 00:06:37.507 Has header "pcap.h" with dependency 
libpcap: YES 00:06:37.507 Compiler for C supports arguments -Wcast-qual: YES 00:06:37.507 Compiler for C supports arguments -Wdeprecated: YES 00:06:37.507 Compiler for C supports arguments -Wformat: YES 00:06:37.507 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:37.507 Compiler for C supports arguments -Wformat-security: NO 00:06:37.507 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:37.507 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:37.507 Compiler for C supports arguments -Wnested-externs: YES 00:06:37.507 Compiler for C supports arguments -Wold-style-definition: YES 00:06:37.507 Compiler for C supports arguments -Wpointer-arith: YES 00:06:37.507 Compiler for C supports arguments -Wsign-compare: YES 00:06:37.507 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:37.507 Compiler for C supports arguments -Wundef: YES 00:06:37.507 Compiler for C supports arguments -Wwrite-strings: YES 00:06:37.507 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:37.507 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:37.507 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:37.507 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:37.507 Program objdump found: YES (/usr/bin/objdump) 00:06:37.507 Compiler for C supports arguments -mavx512f: YES 00:06:37.507 Checking if "AVX512 checking" compiles: YES 00:06:37.507 Fetching value of define "__SSE4_2__" : 1 00:06:37.507 Fetching value of define "__AES__" : 1 00:06:37.507 Fetching value of define "__AVX__" : 1 00:06:37.507 Fetching value of define "__AVX2__" : 1 00:06:37.507 Fetching value of define "__AVX512BW__" : (undefined) 00:06:37.507 Fetching value of define "__AVX512CD__" : (undefined) 00:06:37.507 Fetching value of define "__AVX512DQ__" : (undefined) 00:06:37.507 Fetching value of define "__AVX512F__" : (undefined) 00:06:37.507 Fetching value of define "__AVX512VL__" : 
(undefined) 00:06:37.507 Fetching value of define "__PCLMUL__" : 1 00:06:37.507 Fetching value of define "__RDRND__" : 1 00:06:37.507 Fetching value of define "__RDSEED__" : 1 00:06:37.507 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:37.507 Fetching value of define "__znver1__" : (undefined) 00:06:37.507 Fetching value of define "__znver2__" : (undefined) 00:06:37.507 Fetching value of define "__znver3__" : (undefined) 00:06:37.507 Fetching value of define "__znver4__" : (undefined) 00:06:37.507 Library asan found: YES 00:06:37.507 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:37.507 Message: lib/log: Defining dependency "log" 00:06:37.507 Message: lib/kvargs: Defining dependency "kvargs" 00:06:37.507 Message: lib/telemetry: Defining dependency "telemetry" 00:06:37.507 Library rt found: YES 00:06:37.507 Checking for function "getentropy" : NO 00:06:37.507 Message: lib/eal: Defining dependency "eal" 00:06:37.507 Message: lib/ring: Defining dependency "ring" 00:06:37.507 Message: lib/rcu: Defining dependency "rcu" 00:06:37.507 Message: lib/mempool: Defining dependency "mempool" 00:06:37.507 Message: lib/mbuf: Defining dependency "mbuf" 00:06:37.507 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:37.507 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:06:37.507 Compiler for C supports arguments -mpclmul: YES 00:06:37.507 Compiler for C supports arguments -maes: YES 00:06:37.507 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:37.507 Compiler for C supports arguments -mavx512bw: YES 00:06:37.507 Compiler for C supports arguments -mavx512dq: YES 00:06:37.507 Compiler for C supports arguments -mavx512vl: YES 00:06:37.507 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:37.507 Compiler for C supports arguments -mavx2: YES 00:06:37.507 Compiler for C supports arguments -mavx: YES 00:06:37.507 Message: lib/net: Defining dependency "net" 00:06:37.507 Message: lib/meter: Defining 
dependency "meter" 00:06:37.508 Message: lib/ethdev: Defining dependency "ethdev" 00:06:37.508 Message: lib/pci: Defining dependency "pci" 00:06:37.508 Message: lib/cmdline: Defining dependency "cmdline" 00:06:37.508 Message: lib/hash: Defining dependency "hash" 00:06:37.508 Message: lib/timer: Defining dependency "timer" 00:06:37.508 Message: lib/compressdev: Defining dependency "compressdev" 00:06:37.508 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:37.508 Message: lib/dmadev: Defining dependency "dmadev" 00:06:37.508 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:37.508 Message: lib/power: Defining dependency "power" 00:06:37.508 Message: lib/reorder: Defining dependency "reorder" 00:06:37.508 Message: lib/security: Defining dependency "security" 00:06:37.508 Has header "linux/userfaultfd.h" : YES 00:06:37.508 Has header "linux/vduse.h" : YES 00:06:37.508 Message: lib/vhost: Defining dependency "vhost" 00:06:37.508 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:37.508 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:37.508 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:37.508 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:37.508 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:37.508 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:37.508 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:37.508 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:37.508 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:37.508 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:37.508 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:37.508 Configuring doxy-api-html.conf using configuration 00:06:37.508 Configuring doxy-api-man.conf using configuration 00:06:37.508 Program mandb found: YES 
(/usr/bin/mandb) 00:06:37.508 Program sphinx-build found: NO 00:06:37.508 Configuring rte_build_config.h using configuration 00:06:37.508 Message: 00:06:37.508 ================= 00:06:37.508 Applications Enabled 00:06:37.508 ================= 00:06:37.508 00:06:37.508 apps: 00:06:37.508 00:06:37.508 00:06:37.508 Message: 00:06:37.508 ================= 00:06:37.508 Libraries Enabled 00:06:37.508 ================= 00:06:37.508 00:06:37.508 libs: 00:06:37.508 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:37.508 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:37.508 cryptodev, dmadev, power, reorder, security, vhost, 00:06:37.508 00:06:37.508 Message: 00:06:37.508 =============== 00:06:37.508 Drivers Enabled 00:06:37.508 =============== 00:06:37.508 00:06:37.508 common: 00:06:37.508 00:06:37.508 bus: 00:06:37.508 pci, vdev, 00:06:37.508 mempool: 00:06:37.508 ring, 00:06:37.508 dma: 00:06:37.508 00:06:37.508 net: 00:06:37.508 00:06:37.508 crypto: 00:06:37.508 00:06:37.508 compress: 00:06:37.508 00:06:37.508 vdpa: 00:06:37.508 00:06:37.508 00:06:37.508 Message: 00:06:37.508 ================= 00:06:37.508 Content Skipped 00:06:37.508 ================= 00:06:37.508 00:06:37.508 apps: 00:06:37.508 dumpcap: explicitly disabled via build config 00:06:37.508 graph: explicitly disabled via build config 00:06:37.508 pdump: explicitly disabled via build config 00:06:37.508 proc-info: explicitly disabled via build config 00:06:37.508 test-acl: explicitly disabled via build config 00:06:37.508 test-bbdev: explicitly disabled via build config 00:06:37.508 test-cmdline: explicitly disabled via build config 00:06:37.508 test-compress-perf: explicitly disabled via build config 00:06:37.508 test-crypto-perf: explicitly disabled via build config 00:06:37.508 test-dma-perf: explicitly disabled via build config 00:06:37.508 test-eventdev: explicitly disabled via build config 00:06:37.508 test-fib: explicitly disabled via build config 00:06:37.508 
test-flow-perf: explicitly disabled via build config 00:06:37.508 test-gpudev: explicitly disabled via build config 00:06:37.508 test-mldev: explicitly disabled via build config 00:06:37.508 test-pipeline: explicitly disabled via build config 00:06:37.508 test-pmd: explicitly disabled via build config 00:06:37.508 test-regex: explicitly disabled via build config 00:06:37.508 test-sad: explicitly disabled via build config 00:06:37.508 test-security-perf: explicitly disabled via build config 00:06:37.508 00:06:37.508 libs: 00:06:37.508 argparse: explicitly disabled via build config 00:06:37.508 metrics: explicitly disabled via build config 00:06:37.508 acl: explicitly disabled via build config 00:06:37.508 bbdev: explicitly disabled via build config 00:06:37.508 bitratestats: explicitly disabled via build config 00:06:37.508 bpf: explicitly disabled via build config 00:06:37.508 cfgfile: explicitly disabled via build config 00:06:37.508 distributor: explicitly disabled via build config 00:06:37.508 efd: explicitly disabled via build config 00:06:37.508 eventdev: explicitly disabled via build config 00:06:37.508 dispatcher: explicitly disabled via build config 00:06:37.508 gpudev: explicitly disabled via build config 00:06:37.508 gro: explicitly disabled via build config 00:06:37.508 gso: explicitly disabled via build config 00:06:37.508 ip_frag: explicitly disabled via build config 00:06:37.508 jobstats: explicitly disabled via build config 00:06:37.508 latencystats: explicitly disabled via build config 00:06:37.508 lpm: explicitly disabled via build config 00:06:37.508 member: explicitly disabled via build config 00:06:37.508 pcapng: explicitly disabled via build config 00:06:37.508 rawdev: explicitly disabled via build config 00:06:37.508 regexdev: explicitly disabled via build config 00:06:37.508 mldev: explicitly disabled via build config 00:06:37.508 rib: explicitly disabled via build config 00:06:37.508 sched: explicitly disabled via build config 00:06:37.508 
stack: explicitly disabled via build config 00:06:37.508 ipsec: explicitly disabled via build config 00:06:37.508 pdcp: explicitly disabled via build config 00:06:37.508 fib: explicitly disabled via build config 00:06:37.508 port: explicitly disabled via build config 00:06:37.508 pdump: explicitly disabled via build config 00:06:37.508 table: explicitly disabled via build config 00:06:37.508 pipeline: explicitly disabled via build config 00:06:37.508 graph: explicitly disabled via build config 00:06:37.508 node: explicitly disabled via build config 00:06:37.508 00:06:37.508 drivers: 00:06:37.508 common/cpt: not in enabled drivers build config 00:06:37.508 common/dpaax: not in enabled drivers build config 00:06:37.508 common/iavf: not in enabled drivers build config 00:06:37.508 common/idpf: not in enabled drivers build config 00:06:37.508 common/ionic: not in enabled drivers build config 00:06:37.508 common/mvep: not in enabled drivers build config 00:06:37.508 common/octeontx: not in enabled drivers build config 00:06:37.508 bus/auxiliary: not in enabled drivers build config 00:06:37.508 bus/cdx: not in enabled drivers build config 00:06:37.508 bus/dpaa: not in enabled drivers build config 00:06:37.508 bus/fslmc: not in enabled drivers build config 00:06:37.508 bus/ifpga: not in enabled drivers build config 00:06:37.508 bus/platform: not in enabled drivers build config 00:06:37.508 bus/uacce: not in enabled drivers build config 00:06:37.508 bus/vmbus: not in enabled drivers build config 00:06:37.508 common/cnxk: not in enabled drivers build config 00:06:37.508 common/mlx5: not in enabled drivers build config 00:06:37.508 common/nfp: not in enabled drivers build config 00:06:37.508 common/nitrox: not in enabled drivers build config 00:06:37.508 common/qat: not in enabled drivers build config 00:06:37.508 common/sfc_efx: not in enabled drivers build config 00:06:37.508 mempool/bucket: not in enabled drivers build config 00:06:37.508 mempool/cnxk: not in enabled 
drivers build config 00:06:37.508 mempool/dpaa: not in enabled drivers build config 00:06:37.508 mempool/dpaa2: not in enabled drivers build config 00:06:37.508 mempool/octeontx: not in enabled drivers build config 00:06:37.508 mempool/stack: not in enabled drivers build config 00:06:37.508 dma/cnxk: not in enabled drivers build config 00:06:37.508 dma/dpaa: not in enabled drivers build config 00:06:37.508 dma/dpaa2: not in enabled drivers build config 00:06:37.508 dma/hisilicon: not in enabled drivers build config 00:06:37.508 dma/idxd: not in enabled drivers build config 00:06:37.508 dma/ioat: not in enabled drivers build config 00:06:37.508 dma/skeleton: not in enabled drivers build config 00:06:37.508 net/af_packet: not in enabled drivers build config 00:06:37.508 net/af_xdp: not in enabled drivers build config 00:06:37.508 net/ark: not in enabled drivers build config 00:06:37.508 net/atlantic: not in enabled drivers build config 00:06:37.508 net/avp: not in enabled drivers build config 00:06:37.508 net/axgbe: not in enabled drivers build config 00:06:37.508 net/bnx2x: not in enabled drivers build config 00:06:37.508 net/bnxt: not in enabled drivers build config 00:06:37.508 net/bonding: not in enabled drivers build config 00:06:37.508 net/cnxk: not in enabled drivers build config 00:06:37.508 net/cpfl: not in enabled drivers build config 00:06:37.508 net/cxgbe: not in enabled drivers build config 00:06:37.508 net/dpaa: not in enabled drivers build config 00:06:37.508 net/dpaa2: not in enabled drivers build config 00:06:37.508 net/e1000: not in enabled drivers build config 00:06:37.508 net/ena: not in enabled drivers build config 00:06:37.508 net/enetc: not in enabled drivers build config 00:06:37.508 net/enetfec: not in enabled drivers build config 00:06:37.508 net/enic: not in enabled drivers build config 00:06:37.508 net/failsafe: not in enabled drivers build config 00:06:37.508 net/fm10k: not in enabled drivers build config 00:06:37.508 net/gve: not in 
enabled drivers build config 00:06:37.509 net/hinic: not in enabled drivers build config 00:06:37.509 net/hns3: not in enabled drivers build config 00:06:37.509 net/i40e: not in enabled drivers build config 00:06:37.509 net/iavf: not in enabled drivers build config 00:06:37.509 net/ice: not in enabled drivers build config 00:06:37.509 net/idpf: not in enabled drivers build config 00:06:37.509 net/igc: not in enabled drivers build config 00:06:37.509 net/ionic: not in enabled drivers build config 00:06:37.509 net/ipn3ke: not in enabled drivers build config 00:06:37.509 net/ixgbe: not in enabled drivers build config 00:06:37.509 net/mana: not in enabled drivers build config 00:06:37.509 net/memif: not in enabled drivers build config 00:06:37.509 net/mlx4: not in enabled drivers build config 00:06:37.509 net/mlx5: not in enabled drivers build config 00:06:37.509 net/mvneta: not in enabled drivers build config 00:06:37.509 net/mvpp2: not in enabled drivers build config 00:06:37.509 net/netvsc: not in enabled drivers build config 00:06:37.509 net/nfb: not in enabled drivers build config 00:06:37.509 net/nfp: not in enabled drivers build config 00:06:37.509 net/ngbe: not in enabled drivers build config 00:06:37.509 net/null: not in enabled drivers build config 00:06:37.509 net/octeontx: not in enabled drivers build config 00:06:37.509 net/octeon_ep: not in enabled drivers build config 00:06:37.509 net/pcap: not in enabled drivers build config 00:06:37.509 net/pfe: not in enabled drivers build config 00:06:37.509 net/qede: not in enabled drivers build config 00:06:37.509 net/ring: not in enabled drivers build config 00:06:37.509 net/sfc: not in enabled drivers build config 00:06:37.509 net/softnic: not in enabled drivers build config 00:06:37.509 net/tap: not in enabled drivers build config 00:06:37.509 net/thunderx: not in enabled drivers build config 00:06:37.509 net/txgbe: not in enabled drivers build config 00:06:37.509 net/vdev_netvsc: not in enabled drivers build 
config 00:06:37.509 net/vhost: not in enabled drivers build config 00:06:37.509 net/virtio: not in enabled drivers build config 00:06:37.509 net/vmxnet3: not in enabled drivers build config 00:06:37.509 raw/*: missing internal dependency, "rawdev" 00:06:37.509 crypto/armv8: not in enabled drivers build config 00:06:37.509 crypto/bcmfs: not in enabled drivers build config 00:06:37.509 crypto/caam_jr: not in enabled drivers build config 00:06:37.509 crypto/ccp: not in enabled drivers build config 00:06:37.509 crypto/cnxk: not in enabled drivers build config 00:06:37.509 crypto/dpaa_sec: not in enabled drivers build config 00:06:37.509 crypto/dpaa2_sec: not in enabled drivers build config 00:06:37.509 crypto/ipsec_mb: not in enabled drivers build config 00:06:37.509 crypto/mlx5: not in enabled drivers build config 00:06:37.509 crypto/mvsam: not in enabled drivers build config 00:06:37.509 crypto/nitrox: not in enabled drivers build config 00:06:37.509 crypto/null: not in enabled drivers build config 00:06:37.509 crypto/octeontx: not in enabled drivers build config 00:06:37.509 crypto/openssl: not in enabled drivers build config 00:06:37.509 crypto/scheduler: not in enabled drivers build config 00:06:37.509 crypto/uadk: not in enabled drivers build config 00:06:37.509 crypto/virtio: not in enabled drivers build config 00:06:37.509 compress/isal: not in enabled drivers build config 00:06:37.509 compress/mlx5: not in enabled drivers build config 00:06:37.509 compress/nitrox: not in enabled drivers build config 00:06:37.509 compress/octeontx: not in enabled drivers build config 00:06:37.509 compress/zlib: not in enabled drivers build config 00:06:37.509 regex/*: missing internal dependency, "regexdev" 00:06:37.509 ml/*: missing internal dependency, "mldev" 00:06:37.509 vdpa/ifc: not in enabled drivers build config 00:06:37.509 vdpa/mlx5: not in enabled drivers build config 00:06:37.509 vdpa/nfp: not in enabled drivers build config 00:06:37.509 vdpa/sfc: not in enabled 
drivers build config 00:06:37.509 event/*: missing internal dependency, "eventdev" 00:06:37.509 baseband/*: missing internal dependency, "bbdev" 00:06:37.509 gpu/*: missing internal dependency, "gpudev" 00:06:37.509 00:06:37.509 00:06:37.509 Build targets in project: 85 00:06:37.509 00:06:37.509 DPDK 24.03.0 00:06:37.509 00:06:37.509 User defined options 00:06:37.509 buildtype : debug 00:06:37.509 default_library : shared 00:06:37.509 libdir : lib 00:06:37.509 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:37.509 b_sanitize : address 00:06:37.509 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:37.509 c_link_args : 00:06:37.509 cpu_instruction_set: native 00:06:37.509 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:37.509 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:37.509 enable_docs : false 00:06:37.509 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:37.509 enable_kmods : false 00:06:37.509 max_lcores : 128 00:06:37.509 tests : false 00:06:37.509 00:06:37.509 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:37.509 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:37.509 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:37.509 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:37.509 [3/268] Linking static target lib/librte_kvargs.a 00:06:37.509 [4/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:37.509 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:37.509 [6/268] Linking static target lib/librte_log.a 00:06:37.766 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:37.766 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:37.766 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:38.024 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:38.024 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:38.024 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:38.024 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:38.282 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:38.282 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:38.541 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:38.541 [17/268] Linking static target lib/librte_telemetry.a 00:06:38.541 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:38.541 [19/268] Linking target lib/librte_log.so.24.1 00:06:38.799 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:38.799 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:38.799 [22/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:39.057 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:39.057 [24/268] Linking target lib/librte_kvargs.so.24.1 00:06:39.057 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:39.315 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 
00:06:39.315 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:39.315 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:39.572 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:39.572 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:39.572 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:39.572 [32/268] Linking target lib/librte_telemetry.so.24.1 00:06:39.572 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:39.830 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:39.830 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:39.830 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:40.087 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:40.087 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:40.345 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:40.345 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:40.345 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:40.345 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:40.345 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:40.345 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:40.603 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:40.603 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:40.861 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:40.861 [48/268] Compiling 
C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:41.120 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:41.378 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:41.378 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:41.378 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:41.378 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:41.636 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:41.636 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:41.636 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:41.893 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:41.893 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:42.151 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:42.151 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:42.151 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:42.151 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:42.409 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:42.409 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:42.409 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:42.668 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:42.668 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:42.927 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:42.927 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:42.927 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:43.185 
[71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:43.185 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:43.185 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:43.185 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:43.443 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:43.443 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:43.701 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:43.701 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:43.701 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:43.701 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:43.701 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:43.960 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:43.960 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:43.960 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:43.960 [85/268] Linking static target lib/librte_ring.a 00:06:44.219 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:44.478 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:44.478 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:44.478 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:44.478 [90/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:44.478 [91/268] Linking static target lib/librte_eal.a 00:06:44.738 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:44.738 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:44.738 [94/268] Linking static 
target lib/librte_mempool.a 00:06:44.738 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:44.997 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:44.997 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:44.997 [98/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:44.997 [99/268] Linking static target lib/librte_rcu.a 00:06:45.256 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:45.514 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:45.514 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:45.514 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:45.514 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:45.514 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:45.514 [106/268] Linking static target lib/librte_mbuf.a 00:06:45.514 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:45.773 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:45.773 [109/268] Linking static target lib/librte_net.a 00:06:45.773 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:45.773 [111/268] Linking static target lib/librte_meter.a 00:06:46.032 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:46.291 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:46.291 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:46.291 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:46.291 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:46.291 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:46.550 [118/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:46.550 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:47.115 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:47.115 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:47.115 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:47.375 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:47.634 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:47.634 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:47.893 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:47.893 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:47.893 [128/268] Linking static target lib/librte_pci.a 00:06:47.893 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:47.893 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:47.893 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:47.893 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:47.893 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:48.151 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:48.151 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:48.151 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:48.151 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:48.151 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:48.410 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:48.410 [140/268] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:48.410 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:48.410 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:48.410 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:48.410 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:48.410 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:48.669 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:48.669 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:48.669 [148/268] Linking static target lib/librte_cmdline.a 00:06:48.928 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:49.186 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:49.186 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:49.444 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:49.444 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:49.703 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:49.703 [155/268] Linking static target lib/librte_timer.a 00:06:49.960 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:49.960 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:49.960 [158/268] Linking static target lib/librte_ethdev.a 00:06:49.960 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:49.960 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:49.960 [161/268] Linking static target lib/librte_hash.a 00:06:50.219 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:50.219 [163/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:50.219 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:50.219 [165/268] Linking static target lib/librte_compressdev.a 00:06:50.477 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:50.477 [167/268] Linking static target lib/librte_dmadev.a 00:06:50.477 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:50.477 [169/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:50.736 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:50.736 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:50.994 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:50.994 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:51.252 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:51.252 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:51.510 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:51.510 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:51.510 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:51.510 [179/268] Linking static target lib/librte_cryptodev.a 00:06:51.510 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:51.510 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:51.510 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:51.785 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:52.047 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 
00:06:52.306 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:52.565 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:52.565 [187/268] Linking static target lib/librte_power.a 00:06:52.565 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:52.565 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:52.824 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:52.824 [191/268] Linking static target lib/librte_security.a 00:06:52.824 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:53.082 [193/268] Linking static target lib/librte_reorder.a 00:06:53.341 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:53.599 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:53.599 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:53.857 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:53.857 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:53.857 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:54.115 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.373 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:54.373 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:54.631 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:54.631 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:54.631 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:54.888 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:54.888 [207/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:55.146 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:55.146 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:55.146 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:55.146 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:55.405 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:55.405 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:55.405 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:55.405 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:55.405 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:55.405 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:55.405 [218/268] Linking static target drivers/librte_bus_vdev.a 00:06:55.663 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:55.663 [220/268] Linking static target drivers/librte_bus_pci.a 00:06:55.663 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:55.663 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:55.920 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:55.920 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:55.921 [225/268] Linking static target drivers/librte_mempool_ring.a 00:06:55.921 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:56.178 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:06:57.135 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:57.135 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:57.135 [230/268] Linking target lib/librte_eal.so.24.1 00:06:57.392 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:57.392 [232/268] Linking target lib/librte_ring.so.24.1 00:06:57.392 [233/268] Linking target lib/librte_pci.so.24.1 00:06:57.392 [234/268] Linking target lib/librte_timer.so.24.1 00:06:57.392 [235/268] Linking target lib/librte_meter.so.24.1 00:06:57.392 [236/268] Linking target lib/librte_dmadev.so.24.1 00:06:57.392 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:57.650 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:57.650 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:57.650 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:57.650 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:57.650 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:57.650 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:57.650 [244/268] Linking target lib/librte_rcu.so.24.1 00:06:57.650 [245/268] Linking target lib/librte_mempool.so.24.1 00:06:57.907 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:57.907 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:57.907 [248/268] Linking target lib/librte_mbuf.so.24.1 00:06:57.907 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:57.907 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:58.165 [251/268] Linking target lib/librte_net.so.24.1 00:06:58.165 [252/268] Linking target 
lib/librte_compressdev.so.24.1 00:06:58.165 [253/268] Linking target lib/librte_reorder.so.24.1 00:06:58.165 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:06:58.165 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:58.165 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:58.165 [257/268] Linking target lib/librte_security.so.24.1 00:06:58.423 [258/268] Linking target lib/librte_cmdline.so.24.1 00:06:58.424 [259/268] Linking target lib/librte_hash.so.24.1 00:06:58.424 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:58.685 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:58.685 [262/268] Linking target lib/librte_ethdev.so.24.1 00:06:58.685 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:58.942 [264/268] Linking target lib/librte_power.so.24.1 00:07:01.470 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:01.470 [266/268] Linking static target lib/librte_vhost.a 00:07:02.843 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.843 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:02.843 INFO: autodetecting backend as ninja 00:07:02.843 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:29.375 CC lib/ut/ut.o 00:07:29.375 CC lib/log/log_flags.o 00:07:29.375 CC lib/log/log_deprecated.o 00:07:29.375 CC lib/log/log.o 00:07:29.375 CC lib/ut_mock/mock.o 00:07:29.375 LIB libspdk_ut_mock.a 00:07:29.375 LIB libspdk_log.a 00:07:29.375 SO libspdk_ut_mock.so.6.0 00:07:29.375 LIB libspdk_ut.a 00:07:29.375 SO libspdk_log.so.7.1 00:07:29.375 SO libspdk_ut.so.2.0 00:07:29.375 SYMLINK libspdk_ut_mock.so 00:07:29.375 SYMLINK libspdk_log.so 00:07:29.375 SYMLINK libspdk_ut.so 
00:07:29.375 CXX lib/trace_parser/trace.o 00:07:29.375 CC lib/util/base64.o 00:07:29.375 CC lib/util/bit_array.o 00:07:29.375 CC lib/util/cpuset.o 00:07:29.375 CC lib/util/crc16.o 00:07:29.375 CC lib/util/crc32.o 00:07:29.375 CC lib/dma/dma.o 00:07:29.375 CC lib/ioat/ioat.o 00:07:29.375 CC lib/util/crc32c.o 00:07:29.375 CC lib/vfio_user/host/vfio_user_pci.o 00:07:29.375 CC lib/util/crc32_ieee.o 00:07:29.375 CC lib/util/crc64.o 00:07:29.375 CC lib/util/dif.o 00:07:29.375 CC lib/vfio_user/host/vfio_user.o 00:07:29.375 CC lib/util/fd.o 00:07:29.375 CC lib/util/fd_group.o 00:07:29.375 LIB libspdk_dma.a 00:07:29.375 SO libspdk_dma.so.5.0 00:07:29.375 CC lib/util/file.o 00:07:29.375 CC lib/util/hexlify.o 00:07:29.375 SYMLINK libspdk_dma.so 00:07:29.375 CC lib/util/iov.o 00:07:29.375 CC lib/util/math.o 00:07:29.375 CC lib/util/net.o 00:07:29.375 LIB libspdk_ioat.a 00:07:29.375 SO libspdk_ioat.so.7.0 00:07:29.375 CC lib/util/pipe.o 00:07:29.375 CC lib/util/strerror_tls.o 00:07:29.375 LIB libspdk_vfio_user.a 00:07:29.375 SO libspdk_vfio_user.so.5.0 00:07:29.375 CC lib/util/string.o 00:07:29.375 SYMLINK libspdk_ioat.so 00:07:29.375 CC lib/util/uuid.o 00:07:29.375 SYMLINK libspdk_vfio_user.so 00:07:29.375 CC lib/util/xor.o 00:07:29.375 CC lib/util/zipf.o 00:07:29.375 CC lib/util/md5.o 00:07:29.375 LIB libspdk_util.a 00:07:29.375 SO libspdk_util.so.10.1 00:07:29.375 LIB libspdk_trace_parser.a 00:07:29.375 SO libspdk_trace_parser.so.6.0 00:07:29.375 SYMLINK libspdk_util.so 00:07:29.375 SYMLINK libspdk_trace_parser.so 00:07:29.375 CC lib/vmd/led.o 00:07:29.375 CC lib/vmd/vmd.o 00:07:29.375 CC lib/idxd/idxd.o 00:07:29.375 CC lib/idxd/idxd_user.o 00:07:29.375 CC lib/idxd/idxd_kernel.o 00:07:29.375 CC lib/rdma_utils/rdma_utils.o 00:07:29.375 CC lib/json/json_parse.o 00:07:29.375 CC lib/json/json_util.o 00:07:29.375 CC lib/conf/conf.o 00:07:29.375 CC lib/env_dpdk/env.o 00:07:29.375 CC lib/env_dpdk/memory.o 00:07:29.375 CC lib/env_dpdk/pci.o 00:07:29.375 CC lib/json/json_write.o 
00:07:29.375 LIB libspdk_conf.a 00:07:29.375 CC lib/env_dpdk/init.o 00:07:29.375 SO libspdk_conf.so.6.0 00:07:29.375 CC lib/env_dpdk/threads.o 00:07:29.375 LIB libspdk_rdma_utils.a 00:07:29.633 SO libspdk_rdma_utils.so.1.0 00:07:29.633 SYMLINK libspdk_conf.so 00:07:29.633 CC lib/env_dpdk/pci_ioat.o 00:07:29.633 SYMLINK libspdk_rdma_utils.so 00:07:29.633 CC lib/env_dpdk/pci_virtio.o 00:07:29.891 CC lib/env_dpdk/pci_vmd.o 00:07:29.891 CC lib/env_dpdk/pci_idxd.o 00:07:29.891 CC lib/env_dpdk/pci_event.o 00:07:29.891 LIB libspdk_vmd.a 00:07:29.891 CC lib/env_dpdk/sigbus_handler.o 00:07:29.891 LIB libspdk_json.a 00:07:29.891 SO libspdk_vmd.so.6.0 00:07:29.891 SO libspdk_json.so.6.0 00:07:29.891 CC lib/env_dpdk/pci_dpdk.o 00:07:29.891 SYMLINK libspdk_vmd.so 00:07:30.148 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:30.148 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:30.148 SYMLINK libspdk_json.so 00:07:30.452 CC lib/rdma_provider/common.o 00:07:30.452 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:30.452 CC lib/jsonrpc/jsonrpc_server.o 00:07:30.452 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:30.452 CC lib/jsonrpc/jsonrpc_client.o 00:07:30.452 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:30.452 LIB libspdk_idxd.a 00:07:30.452 SO libspdk_idxd.so.12.1 00:07:30.452 SYMLINK libspdk_idxd.so 00:07:30.709 LIB libspdk_rdma_provider.a 00:07:30.709 SO libspdk_rdma_provider.so.7.0 00:07:30.709 SYMLINK libspdk_rdma_provider.so 00:07:30.972 LIB libspdk_jsonrpc.a 00:07:30.972 SO libspdk_jsonrpc.so.6.0 00:07:30.972 SYMLINK libspdk_jsonrpc.so 00:07:31.229 CC lib/rpc/rpc.o 00:07:31.487 LIB libspdk_rpc.a 00:07:31.487 SO libspdk_rpc.so.6.0 00:07:31.745 LIB libspdk_env_dpdk.a 00:07:31.745 SYMLINK libspdk_rpc.so 00:07:31.745 SO libspdk_env_dpdk.so.15.1 00:07:32.002 CC lib/keyring/keyring.o 00:07:32.002 CC lib/keyring/keyring_rpc.o 00:07:32.002 CC lib/trace/trace.o 00:07:32.002 CC lib/trace/trace_flags.o 00:07:32.002 CC lib/trace/trace_rpc.o 00:07:32.002 CC lib/notify/notify.o 00:07:32.002 CC 
lib/notify/notify_rpc.o 00:07:32.002 SYMLINK libspdk_env_dpdk.so 00:07:32.259 LIB libspdk_notify.a 00:07:32.259 SO libspdk_notify.so.6.0 00:07:32.259 SYMLINK libspdk_notify.so 00:07:32.517 LIB libspdk_keyring.a 00:07:32.517 LIB libspdk_trace.a 00:07:32.517 SO libspdk_keyring.so.2.0 00:07:32.517 SO libspdk_trace.so.11.0 00:07:32.518 SYMLINK libspdk_trace.so 00:07:32.518 SYMLINK libspdk_keyring.so 00:07:32.775 CC lib/sock/sock.o 00:07:32.775 CC lib/sock/sock_rpc.o 00:07:32.775 CC lib/thread/thread.o 00:07:32.775 CC lib/thread/iobuf.o 00:07:33.339 LIB libspdk_sock.a 00:07:33.339 SO libspdk_sock.so.10.0 00:07:33.339 SYMLINK libspdk_sock.so 00:07:33.643 CC lib/nvme/nvme_fabric.o 00:07:33.643 CC lib/nvme/nvme_ctrlr.o 00:07:33.643 CC lib/nvme/nvme_ns_cmd.o 00:07:33.643 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:33.643 CC lib/nvme/nvme_ns.o 00:07:33.643 CC lib/nvme/nvme_qpair.o 00:07:33.643 CC lib/nvme/nvme_pcie.o 00:07:33.643 CC lib/nvme/nvme_pcie_common.o 00:07:33.643 CC lib/nvme/nvme.o 00:07:34.575 CC lib/nvme/nvme_quirks.o 00:07:34.832 CC lib/nvme/nvme_transport.o 00:07:34.832 CC lib/nvme/nvme_discovery.o 00:07:34.832 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:35.089 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:35.347 CC lib/nvme/nvme_tcp.o 00:07:35.347 CC lib/nvme/nvme_opal.o 00:07:35.347 LIB libspdk_thread.a 00:07:35.605 SO libspdk_thread.so.11.0 00:07:35.605 CC lib/nvme/nvme_io_msg.o 00:07:35.605 CC lib/nvme/nvme_poll_group.o 00:07:35.605 CC lib/nvme/nvme_zns.o 00:07:35.605 SYMLINK libspdk_thread.so 00:07:35.605 CC lib/nvme/nvme_stubs.o 00:07:35.863 CC lib/nvme/nvme_auth.o 00:07:35.863 CC lib/nvme/nvme_cuse.o 00:07:36.120 CC lib/accel/accel.o 00:07:36.378 CC lib/nvme/nvme_rdma.o 00:07:36.378 CC lib/blob/blobstore.o 00:07:36.378 CC lib/blob/request.o 00:07:36.944 CC lib/blob/zeroes.o 00:07:36.944 CC lib/init/json_config.o 00:07:36.944 CC lib/init/subsystem.o 00:07:37.202 CC lib/accel/accel_rpc.o 00:07:37.202 CC lib/blob/blob_bs_dev.o 00:07:37.202 CC lib/accel/accel_sw.o 00:07:37.202 
CC lib/init/subsystem_rpc.o 00:07:37.202 CC lib/init/rpc.o 00:07:37.461 LIB libspdk_init.a 00:07:37.461 SO libspdk_init.so.6.0 00:07:37.461 CC lib/virtio/virtio.o 00:07:37.461 CC lib/virtio/virtio_vhost_user.o 00:07:37.719 SYMLINK libspdk_init.so 00:07:37.719 CC lib/virtio/virtio_vfio_user.o 00:07:37.719 CC lib/fsdev/fsdev.o 00:07:37.719 CC lib/virtio/virtio_pci.o 00:07:37.719 CC lib/event/app.o 00:07:37.977 CC lib/event/reactor.o 00:07:37.977 CC lib/event/log_rpc.o 00:07:37.977 CC lib/event/app_rpc.o 00:07:38.235 CC lib/event/scheduler_static.o 00:07:38.235 CC lib/fsdev/fsdev_io.o 00:07:38.235 LIB libspdk_nvme.a 00:07:38.235 LIB libspdk_virtio.a 00:07:38.235 SO libspdk_virtio.so.7.0 00:07:38.235 CC lib/fsdev/fsdev_rpc.o 00:07:38.494 LIB libspdk_accel.a 00:07:38.494 SO libspdk_nvme.so.15.0 00:07:38.494 SYMLINK libspdk_virtio.so 00:07:38.494 SO libspdk_accel.so.16.0 00:07:38.494 LIB libspdk_event.a 00:07:38.494 SYMLINK libspdk_accel.so 00:07:38.752 SO libspdk_event.so.14.0 00:07:38.752 SYMLINK libspdk_event.so 00:07:38.752 SYMLINK libspdk_nvme.so 00:07:38.752 CC lib/bdev/bdev_rpc.o 00:07:38.752 CC lib/bdev/bdev.o 00:07:38.752 CC lib/bdev/bdev_zone.o 00:07:38.752 CC lib/bdev/scsi_nvme.o 00:07:38.752 CC lib/bdev/part.o 00:07:39.010 LIB libspdk_fsdev.a 00:07:39.010 SO libspdk_fsdev.so.2.0 00:07:39.268 SYMLINK libspdk_fsdev.so 00:07:39.525 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:40.473 LIB libspdk_fuse_dispatcher.a 00:07:40.473 SO libspdk_fuse_dispatcher.so.1.0 00:07:40.473 SYMLINK libspdk_fuse_dispatcher.so 00:07:42.370 LIB libspdk_blob.a 00:07:42.370 SO libspdk_blob.so.12.0 00:07:42.370 SYMLINK libspdk_blob.so 00:07:42.370 CC lib/blobfs/blobfs.o 00:07:42.370 CC lib/blobfs/tree.o 00:07:42.370 CC lib/lvol/lvol.o 00:07:42.938 LIB libspdk_bdev.a 00:07:42.938 SO libspdk_bdev.so.17.0 00:07:43.196 SYMLINK libspdk_bdev.so 00:07:43.472 CC lib/nvmf/ctrlr_discovery.o 00:07:43.472 CC lib/nvmf/ctrlr.o 00:07:43.472 CC lib/nvmf/ctrlr_bdev.o 00:07:43.472 CC 
lib/nvmf/subsystem.o 00:07:43.472 CC lib/scsi/dev.o 00:07:43.472 CC lib/ftl/ftl_core.o 00:07:43.472 CC lib/nbd/nbd.o 00:07:43.472 CC lib/ublk/ublk.o 00:07:43.472 LIB libspdk_blobfs.a 00:07:43.760 SO libspdk_blobfs.so.11.0 00:07:43.760 SYMLINK libspdk_blobfs.so 00:07:43.760 CC lib/nbd/nbd_rpc.o 00:07:43.760 LIB libspdk_lvol.a 00:07:43.760 SO libspdk_lvol.so.11.0 00:07:43.760 CC lib/scsi/lun.o 00:07:44.017 CC lib/scsi/port.o 00:07:44.017 LIB libspdk_nbd.a 00:07:44.017 SYMLINK libspdk_lvol.so 00:07:44.017 CC lib/scsi/scsi.o 00:07:44.017 SO libspdk_nbd.so.7.0 00:07:44.017 SYMLINK libspdk_nbd.so 00:07:44.017 CC lib/ublk/ublk_rpc.o 00:07:44.017 CC lib/nvmf/nvmf.o 00:07:44.017 CC lib/nvmf/nvmf_rpc.o 00:07:44.017 CC lib/nvmf/transport.o 00:07:44.276 CC lib/ftl/ftl_init.o 00:07:44.276 CC lib/ftl/ftl_layout.o 00:07:44.276 CC lib/scsi/scsi_bdev.o 00:07:44.533 CC lib/scsi/scsi_pr.o 00:07:44.533 LIB libspdk_ublk.a 00:07:44.533 CC lib/ftl/ftl_debug.o 00:07:44.790 SO libspdk_ublk.so.3.0 00:07:44.790 CC lib/scsi/scsi_rpc.o 00:07:44.790 SYMLINK libspdk_ublk.so 00:07:44.790 CC lib/scsi/task.o 00:07:44.790 CC lib/ftl/ftl_io.o 00:07:45.048 CC lib/ftl/ftl_sb.o 00:07:45.048 CC lib/nvmf/tcp.o 00:07:45.048 CC lib/ftl/ftl_l2p.o 00:07:45.048 CC lib/ftl/ftl_l2p_flat.o 00:07:45.048 CC lib/nvmf/stubs.o 00:07:45.048 CC lib/ftl/ftl_nv_cache.o 00:07:45.307 CC lib/nvmf/mdns_server.o 00:07:45.307 LIB libspdk_scsi.a 00:07:45.307 CC lib/nvmf/rdma.o 00:07:45.307 CC lib/nvmf/auth.o 00:07:45.307 SO libspdk_scsi.so.9.0 00:07:45.307 CC lib/ftl/ftl_band.o 00:07:45.307 CC lib/ftl/ftl_band_ops.o 00:07:45.307 SYMLINK libspdk_scsi.so 00:07:45.565 CC lib/iscsi/conn.o 00:07:45.822 CC lib/ftl/ftl_writer.o 00:07:45.822 CC lib/ftl/ftl_rq.o 00:07:45.822 CC lib/iscsi/init_grp.o 00:07:45.822 CC lib/ftl/ftl_reloc.o 00:07:46.079 CC lib/ftl/ftl_l2p_cache.o 00:07:46.079 CC lib/ftl/ftl_p2l.o 00:07:46.079 CC lib/vhost/vhost.o 00:07:46.337 CC lib/iscsi/iscsi.o 00:07:46.337 CC lib/iscsi/param.o 00:07:46.337 CC 
lib/ftl/ftl_p2l_log.o 00:07:46.595 CC lib/vhost/vhost_rpc.o 00:07:46.595 CC lib/vhost/vhost_scsi.o 00:07:46.853 CC lib/vhost/vhost_blk.o 00:07:46.853 CC lib/vhost/rte_vhost_user.o 00:07:46.853 CC lib/ftl/mngt/ftl_mngt.o 00:07:46.853 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:47.422 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:47.422 CC lib/iscsi/portal_grp.o 00:07:47.422 CC lib/iscsi/tgt_node.o 00:07:47.422 CC lib/iscsi/iscsi_subsystem.o 00:07:47.422 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:47.422 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:47.683 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:47.683 CC lib/iscsi/iscsi_rpc.o 00:07:47.683 CC lib/iscsi/task.o 00:07:47.941 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:47.941 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:47.941 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:47.941 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:48.199 LIB libspdk_vhost.a 00:07:48.199 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:48.199 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:48.199 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:48.199 SO libspdk_vhost.so.8.0 00:07:48.199 CC lib/ftl/utils/ftl_conf.o 00:07:48.199 CC lib/ftl/utils/ftl_md.o 00:07:48.199 LIB libspdk_iscsi.a 00:07:48.199 SYMLINK libspdk_vhost.so 00:07:48.199 CC lib/ftl/utils/ftl_mempool.o 00:07:48.199 CC lib/ftl/utils/ftl_bitmap.o 00:07:48.457 CC lib/ftl/utils/ftl_property.o 00:07:48.457 SO libspdk_iscsi.so.8.0 00:07:48.457 LIB libspdk_nvmf.a 00:07:48.457 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:48.457 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:48.457 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:48.457 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:48.457 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:48.457 SYMLINK libspdk_iscsi.so 00:07:48.457 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:48.457 SO libspdk_nvmf.so.20.0 00:07:48.716 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:48.716 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:48.716 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:48.716 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:48.716 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:48.716 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:48.716 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:48.716 CC lib/ftl/base/ftl_base_dev.o 00:07:48.716 SYMLINK libspdk_nvmf.so 00:07:48.716 CC lib/ftl/base/ftl_base_bdev.o 00:07:48.974 CC lib/ftl/ftl_trace.o 00:07:49.233 LIB libspdk_ftl.a 00:07:49.492 SO libspdk_ftl.so.9.0 00:07:49.751 SYMLINK libspdk_ftl.so 00:07:50.010 CC module/env_dpdk/env_dpdk_rpc.o 00:07:50.268 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:50.268 CC module/fsdev/aio/fsdev_aio.o 00:07:50.268 CC module/accel/dsa/accel_dsa.o 00:07:50.268 CC module/accel/ioat/accel_ioat.o 00:07:50.268 CC module/accel/iaa/accel_iaa.o 00:07:50.268 CC module/accel/error/accel_error.o 00:07:50.268 CC module/sock/posix/posix.o 00:07:50.268 CC module/keyring/file/keyring.o 00:07:50.268 CC module/blob/bdev/blob_bdev.o 00:07:50.268 LIB libspdk_env_dpdk_rpc.a 00:07:50.268 SO libspdk_env_dpdk_rpc.so.6.0 00:07:50.268 SYMLINK libspdk_env_dpdk_rpc.so 00:07:50.525 CC module/keyring/file/keyring_rpc.o 00:07:50.525 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:50.525 LIB libspdk_scheduler_dynamic.a 00:07:50.525 CC module/accel/iaa/accel_iaa_rpc.o 00:07:50.525 CC module/accel/ioat/accel_ioat_rpc.o 00:07:50.525 CC module/accel/error/accel_error_rpc.o 00:07:50.525 SO libspdk_scheduler_dynamic.so.4.0 00:07:50.525 SYMLINK libspdk_scheduler_dynamic.so 00:07:50.525 LIB libspdk_keyring_file.a 00:07:50.525 CC module/fsdev/aio/linux_aio_mgr.o 00:07:50.525 SO libspdk_keyring_file.so.2.0 00:07:50.525 LIB libspdk_blob_bdev.a 00:07:50.525 LIB libspdk_accel_iaa.a 00:07:50.525 CC module/accel/dsa/accel_dsa_rpc.o 00:07:50.525 LIB libspdk_accel_ioat.a 00:07:50.525 LIB libspdk_accel_error.a 00:07:50.525 SO libspdk_blob_bdev.so.12.0 00:07:50.525 SO libspdk_accel_iaa.so.3.0 00:07:50.783 SO libspdk_accel_ioat.so.6.0 00:07:50.783 SO libspdk_accel_error.so.2.0 00:07:50.783 SYMLINK libspdk_keyring_file.so 00:07:50.783 SYMLINK libspdk_blob_bdev.so 00:07:50.783 
SYMLINK libspdk_accel_ioat.so 00:07:50.783 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:50.783 SYMLINK libspdk_accel_error.so 00:07:50.783 SYMLINK libspdk_accel_iaa.so 00:07:50.783 LIB libspdk_accel_dsa.a 00:07:50.783 SO libspdk_accel_dsa.so.5.0 00:07:50.783 SYMLINK libspdk_accel_dsa.so 00:07:50.783 CC module/keyring/linux/keyring.o 00:07:51.042 LIB libspdk_scheduler_dpdk_governor.a 00:07:51.042 CC module/scheduler/gscheduler/gscheduler.o 00:07:51.042 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:51.042 CC module/bdev/gpt/gpt.o 00:07:51.042 CC module/bdev/error/vbdev_error.o 00:07:51.042 CC module/bdev/delay/vbdev_delay.o 00:07:51.042 CC module/blobfs/bdev/blobfs_bdev.o 00:07:51.042 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:51.042 CC module/bdev/gpt/vbdev_gpt.o 00:07:51.042 CC module/keyring/linux/keyring_rpc.o 00:07:51.042 CC module/bdev/lvol/vbdev_lvol.o 00:07:51.042 LIB libspdk_scheduler_gscheduler.a 00:07:51.042 SO libspdk_scheduler_gscheduler.so.4.0 00:07:51.042 LIB libspdk_fsdev_aio.a 00:07:51.300 LIB libspdk_sock_posix.a 00:07:51.300 SO libspdk_fsdev_aio.so.1.0 00:07:51.300 LIB libspdk_keyring_linux.a 00:07:51.300 SYMLINK libspdk_scheduler_gscheduler.so 00:07:51.300 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:51.300 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:51.300 CC module/bdev/error/vbdev_error_rpc.o 00:07:51.300 SO libspdk_sock_posix.so.6.0 00:07:51.300 SO libspdk_keyring_linux.so.1.0 00:07:51.300 SYMLINK libspdk_fsdev_aio.so 00:07:51.300 SYMLINK libspdk_keyring_linux.so 00:07:51.300 SYMLINK libspdk_sock_posix.so 00:07:51.300 LIB libspdk_bdev_gpt.a 00:07:51.300 LIB libspdk_bdev_error.a 00:07:51.558 SO libspdk_bdev_gpt.so.6.0 00:07:51.558 LIB libspdk_blobfs_bdev.a 00:07:51.558 SO libspdk_bdev_error.so.6.0 00:07:51.558 CC module/bdev/malloc/bdev_malloc.o 00:07:51.558 SO libspdk_blobfs_bdev.so.6.0 00:07:51.558 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:51.558 CC module/bdev/null/bdev_null.o 00:07:51.558 CC 
module/bdev/passthru/vbdev_passthru.o 00:07:51.558 CC module/bdev/nvme/bdev_nvme.o 00:07:51.558 SYMLINK libspdk_bdev_gpt.so 00:07:51.558 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:51.558 SYMLINK libspdk_bdev_error.so 00:07:51.558 SYMLINK libspdk_blobfs_bdev.so 00:07:51.558 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:51.558 CC module/bdev/nvme/nvme_rpc.o 00:07:51.819 LIB libspdk_bdev_delay.a 00:07:51.819 CC module/bdev/nvme/bdev_mdns_client.o 00:07:51.819 CC module/bdev/null/bdev_null_rpc.o 00:07:51.819 SO libspdk_bdev_delay.so.6.0 00:07:51.819 LIB libspdk_bdev_lvol.a 00:07:51.819 SO libspdk_bdev_lvol.so.6.0 00:07:51.819 SYMLINK libspdk_bdev_delay.so 00:07:51.819 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:51.819 CC module/bdev/nvme/vbdev_opal.o 00:07:51.819 LIB libspdk_bdev_passthru.a 00:07:51.819 SYMLINK libspdk_bdev_lvol.so 00:07:51.819 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:51.819 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:51.819 SO libspdk_bdev_passthru.so.6.0 00:07:52.159 LIB libspdk_bdev_null.a 00:07:52.160 LIB libspdk_bdev_malloc.a 00:07:52.160 SO libspdk_bdev_null.so.6.0 00:07:52.160 SYMLINK libspdk_bdev_passthru.so 00:07:52.160 SO libspdk_bdev_malloc.so.6.0 00:07:52.160 SYMLINK libspdk_bdev_null.so 00:07:52.160 CC module/bdev/raid/bdev_raid.o 00:07:52.160 SYMLINK libspdk_bdev_malloc.so 00:07:52.160 CC module/bdev/raid/bdev_raid_rpc.o 00:07:52.160 CC module/bdev/split/vbdev_split.o 00:07:52.160 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:52.418 CC module/bdev/aio/bdev_aio.o 00:07:52.418 CC module/bdev/iscsi/bdev_iscsi.o 00:07:52.418 CC module/bdev/ftl/bdev_ftl.o 00:07:52.418 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:52.418 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:52.418 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:52.418 CC module/bdev/split/vbdev_split_rpc.o 00:07:52.676 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:52.676 LIB libspdk_bdev_split.a 00:07:52.676 LIB libspdk_bdev_zone_block.a 00:07:52.676 CC 
module/bdev/aio/bdev_aio_rpc.o 00:07:52.676 SO libspdk_bdev_split.so.6.0 00:07:52.676 SO libspdk_bdev_zone_block.so.6.0 00:07:52.676 LIB libspdk_bdev_ftl.a 00:07:52.676 SO libspdk_bdev_ftl.so.6.0 00:07:52.676 SYMLINK libspdk_bdev_split.so 00:07:52.676 SYMLINK libspdk_bdev_zone_block.so 00:07:52.676 CC module/bdev/raid/bdev_raid_sb.o 00:07:52.676 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:52.676 CC module/bdev/raid/raid0.o 00:07:52.676 CC module/bdev/raid/raid1.o 00:07:52.676 SYMLINK libspdk_bdev_ftl.so 00:07:52.676 LIB libspdk_bdev_iscsi.a 00:07:52.676 CC module/bdev/raid/concat.o 00:07:52.935 SO libspdk_bdev_iscsi.so.6.0 00:07:52.935 LIB libspdk_bdev_aio.a 00:07:52.935 SYMLINK libspdk_bdev_iscsi.so 00:07:52.935 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:52.935 SO libspdk_bdev_aio.so.6.0 00:07:52.935 SYMLINK libspdk_bdev_aio.so 00:07:52.935 CC module/bdev/raid/raid5f.o 00:07:53.194 LIB libspdk_bdev_virtio.a 00:07:53.194 SO libspdk_bdev_virtio.so.6.0 00:07:53.194 SYMLINK libspdk_bdev_virtio.so 00:07:53.761 LIB libspdk_bdev_raid.a 00:07:53.761 SO libspdk_bdev_raid.so.6.0 00:07:53.761 SYMLINK libspdk_bdev_raid.so 00:07:55.137 LIB libspdk_bdev_nvme.a 00:07:55.137 SO libspdk_bdev_nvme.so.7.1 00:07:55.137 SYMLINK libspdk_bdev_nvme.so 00:07:55.705 CC module/event/subsystems/fsdev/fsdev.o 00:07:55.705 CC module/event/subsystems/vmd/vmd.o 00:07:55.705 CC module/event/subsystems/iobuf/iobuf.o 00:07:55.705 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:55.705 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:55.705 CC module/event/subsystems/keyring/keyring.o 00:07:55.705 CC module/event/subsystems/sock/sock.o 00:07:55.705 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:55.705 CC module/event/subsystems/scheduler/scheduler.o 00:07:55.964 LIB libspdk_event_fsdev.a 00:07:55.964 LIB libspdk_event_keyring.a 00:07:55.964 LIB libspdk_event_vmd.a 00:07:55.964 LIB libspdk_event_vhost_blk.a 00:07:55.964 LIB libspdk_event_scheduler.a 00:07:55.964 LIB libspdk_event_iobuf.a 
00:07:55.964 SO libspdk_event_fsdev.so.1.0 00:07:55.964 LIB libspdk_event_sock.a 00:07:55.964 SO libspdk_event_keyring.so.1.0 00:07:55.964 SO libspdk_event_vhost_blk.so.3.0 00:07:55.964 SO libspdk_event_vmd.so.6.0 00:07:55.964 SO libspdk_event_scheduler.so.4.0 00:07:55.964 SO libspdk_event_sock.so.5.0 00:07:55.964 SO libspdk_event_iobuf.so.3.0 00:07:55.964 SYMLINK libspdk_event_fsdev.so 00:07:55.964 SYMLINK libspdk_event_vhost_blk.so 00:07:55.964 SYMLINK libspdk_event_keyring.so 00:07:55.964 SYMLINK libspdk_event_vmd.so 00:07:55.964 SYMLINK libspdk_event_scheduler.so 00:07:55.964 SYMLINK libspdk_event_sock.so 00:07:55.964 SYMLINK libspdk_event_iobuf.so 00:07:56.222 CC module/event/subsystems/accel/accel.o 00:07:56.480 LIB libspdk_event_accel.a 00:07:56.481 SO libspdk_event_accel.so.6.0 00:07:56.481 SYMLINK libspdk_event_accel.so 00:07:56.740 CC module/event/subsystems/bdev/bdev.o 00:07:56.999 LIB libspdk_event_bdev.a 00:07:56.999 SO libspdk_event_bdev.so.6.0 00:07:57.257 SYMLINK libspdk_event_bdev.so 00:07:57.257 CC module/event/subsystems/nbd/nbd.o 00:07:57.257 CC module/event/subsystems/ublk/ublk.o 00:07:57.257 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:57.257 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:57.515 CC module/event/subsystems/scsi/scsi.o 00:07:57.515 LIB libspdk_event_ublk.a 00:07:57.515 LIB libspdk_event_nbd.a 00:07:57.515 SO libspdk_event_ublk.so.3.0 00:07:57.515 SO libspdk_event_nbd.so.6.0 00:07:57.515 LIB libspdk_event_scsi.a 00:07:57.515 SO libspdk_event_scsi.so.6.0 00:07:57.773 SYMLINK libspdk_event_ublk.so 00:07:57.773 SYMLINK libspdk_event_nbd.so 00:07:57.773 SYMLINK libspdk_event_scsi.so 00:07:57.773 LIB libspdk_event_nvmf.a 00:07:57.773 SO libspdk_event_nvmf.so.6.0 00:07:57.773 SYMLINK libspdk_event_nvmf.so 00:07:58.031 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:58.031 CC module/event/subsystems/iscsi/iscsi.o 00:07:58.031 LIB libspdk_event_vhost_scsi.a 00:07:58.031 SO libspdk_event_vhost_scsi.so.3.0 00:07:58.031 LIB 
libspdk_event_iscsi.a 00:07:58.290 SO libspdk_event_iscsi.so.6.0 00:07:58.290 SYMLINK libspdk_event_vhost_scsi.so 00:07:58.290 SYMLINK libspdk_event_iscsi.so 00:07:58.290 SO libspdk.so.6.0 00:07:58.548 SYMLINK libspdk.so 00:07:58.548 CXX app/trace/trace.o 00:07:58.548 CC app/trace_record/trace_record.o 00:07:58.548 CC app/spdk_lspci/spdk_lspci.o 00:07:58.548 CC app/spdk_nvme_identify/identify.o 00:07:58.548 CC app/spdk_nvme_perf/perf.o 00:07:58.807 CC app/nvmf_tgt/nvmf_main.o 00:07:58.807 CC app/spdk_tgt/spdk_tgt.o 00:07:58.807 CC app/iscsi_tgt/iscsi_tgt.o 00:07:58.807 CC test/thread/poller_perf/poller_perf.o 00:07:58.807 CC examples/util/zipf/zipf.o 00:07:58.807 LINK spdk_lspci 00:07:59.066 LINK nvmf_tgt 00:07:59.066 LINK poller_perf 00:07:59.066 LINK zipf 00:07:59.066 LINK spdk_trace_record 00:07:59.066 LINK spdk_tgt 00:07:59.066 LINK iscsi_tgt 00:07:59.066 CC app/spdk_nvme_discover/discovery_aer.o 00:07:59.066 LINK spdk_trace 00:07:59.357 CC app/spdk_top/spdk_top.o 00:07:59.358 LINK spdk_nvme_discover 00:07:59.358 CC examples/ioat/perf/perf.o 00:07:59.358 CC examples/vmd/lsvmd/lsvmd.o 00:07:59.358 CC test/dma/test_dma/test_dma.o 00:07:59.358 CC app/spdk_dd/spdk_dd.o 00:07:59.358 CC examples/ioat/verify/verify.o 00:07:59.628 CC examples/vmd/led/led.o 00:07:59.628 LINK lsvmd 00:07:59.628 LINK ioat_perf 00:07:59.628 LINK led 00:07:59.628 CC app/fio/nvme/fio_plugin.o 00:07:59.628 LINK verify 00:07:59.886 LINK spdk_nvme_identify 00:07:59.886 CC app/vhost/vhost.o 00:07:59.886 LINK spdk_nvme_perf 00:07:59.886 LINK spdk_dd 00:07:59.886 CC app/fio/bdev/fio_plugin.o 00:08:00.145 CC examples/idxd/perf/perf.o 00:08:00.145 CC test/app/bdev_svc/bdev_svc.o 00:08:00.145 LINK vhost 00:08:00.145 LINK test_dma 00:08:00.145 CC test/app/histogram_perf/histogram_perf.o 00:08:00.145 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:00.145 LINK bdev_svc 00:08:00.403 LINK histogram_perf 00:08:00.403 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:00.403 CC test/app/jsoncat/jsoncat.o 
00:08:00.403 LINK spdk_nvme 00:08:00.403 LINK idxd_perf 00:08:00.403 LINK spdk_top 00:08:00.403 CC examples/thread/thread/thread_ex.o 00:08:00.662 LINK jsoncat 00:08:00.662 LINK interrupt_tgt 00:08:00.662 CC test/app/stub/stub.o 00:08:00.662 TEST_HEADER include/spdk/accel.h 00:08:00.662 TEST_HEADER include/spdk/accel_module.h 00:08:00.662 TEST_HEADER include/spdk/assert.h 00:08:00.662 TEST_HEADER include/spdk/barrier.h 00:08:00.662 TEST_HEADER include/spdk/base64.h 00:08:00.662 TEST_HEADER include/spdk/bdev.h 00:08:00.662 TEST_HEADER include/spdk/bdev_module.h 00:08:00.662 TEST_HEADER include/spdk/bdev_zone.h 00:08:00.662 TEST_HEADER include/spdk/bit_array.h 00:08:00.662 TEST_HEADER include/spdk/bit_pool.h 00:08:00.662 TEST_HEADER include/spdk/blob_bdev.h 00:08:00.662 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:00.662 TEST_HEADER include/spdk/blobfs.h 00:08:00.662 TEST_HEADER include/spdk/blob.h 00:08:00.662 TEST_HEADER include/spdk/conf.h 00:08:00.662 TEST_HEADER include/spdk/config.h 00:08:00.662 TEST_HEADER include/spdk/cpuset.h 00:08:00.662 TEST_HEADER include/spdk/crc16.h 00:08:00.662 TEST_HEADER include/spdk/crc32.h 00:08:00.662 TEST_HEADER include/spdk/crc64.h 00:08:00.662 TEST_HEADER include/spdk/dif.h 00:08:00.662 TEST_HEADER include/spdk/dma.h 00:08:00.662 TEST_HEADER include/spdk/endian.h 00:08:00.662 TEST_HEADER include/spdk/env_dpdk.h 00:08:00.662 TEST_HEADER include/spdk/env.h 00:08:00.662 TEST_HEADER include/spdk/event.h 00:08:00.662 TEST_HEADER include/spdk/fd_group.h 00:08:00.662 TEST_HEADER include/spdk/fd.h 00:08:00.662 TEST_HEADER include/spdk/file.h 00:08:00.662 LINK spdk_bdev 00:08:00.662 TEST_HEADER include/spdk/fsdev.h 00:08:00.662 LINK nvme_fuzz 00:08:00.662 CC examples/sock/hello_world/hello_sock.o 00:08:00.662 TEST_HEADER include/spdk/fsdev_module.h 00:08:00.662 TEST_HEADER include/spdk/ftl.h 00:08:00.662 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:00.662 TEST_HEADER include/spdk/gpt_spec.h 00:08:00.662 TEST_HEADER 
include/spdk/hexlify.h 00:08:00.662 TEST_HEADER include/spdk/histogram_data.h 00:08:00.662 TEST_HEADER include/spdk/idxd.h 00:08:00.662 TEST_HEADER include/spdk/idxd_spec.h 00:08:00.662 TEST_HEADER include/spdk/init.h 00:08:00.662 TEST_HEADER include/spdk/ioat.h 00:08:00.662 TEST_HEADER include/spdk/ioat_spec.h 00:08:00.662 TEST_HEADER include/spdk/iscsi_spec.h 00:08:00.662 TEST_HEADER include/spdk/json.h 00:08:00.662 TEST_HEADER include/spdk/jsonrpc.h 00:08:00.662 TEST_HEADER include/spdk/keyring.h 00:08:00.662 TEST_HEADER include/spdk/keyring_module.h 00:08:00.662 TEST_HEADER include/spdk/likely.h 00:08:00.662 TEST_HEADER include/spdk/log.h 00:08:00.662 TEST_HEADER include/spdk/lvol.h 00:08:00.662 TEST_HEADER include/spdk/md5.h 00:08:00.662 TEST_HEADER include/spdk/memory.h 00:08:00.662 TEST_HEADER include/spdk/mmio.h 00:08:00.662 TEST_HEADER include/spdk/nbd.h 00:08:00.662 TEST_HEADER include/spdk/net.h 00:08:00.662 TEST_HEADER include/spdk/notify.h 00:08:00.662 TEST_HEADER include/spdk/nvme.h 00:08:00.662 TEST_HEADER include/spdk/nvme_intel.h 00:08:00.662 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:00.662 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:00.662 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:00.662 TEST_HEADER include/spdk/nvme_spec.h 00:08:00.662 TEST_HEADER include/spdk/nvme_zns.h 00:08:00.662 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:00.662 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:00.662 TEST_HEADER include/spdk/nvmf.h 00:08:00.662 TEST_HEADER include/spdk/nvmf_spec.h 00:08:00.662 TEST_HEADER include/spdk/nvmf_transport.h 00:08:00.662 TEST_HEADER include/spdk/opal.h 00:08:00.662 TEST_HEADER include/spdk/opal_spec.h 00:08:00.662 TEST_HEADER include/spdk/pci_ids.h 00:08:00.662 TEST_HEADER include/spdk/pipe.h 00:08:00.662 TEST_HEADER include/spdk/queue.h 00:08:00.662 TEST_HEADER include/spdk/reduce.h 00:08:00.662 TEST_HEADER include/spdk/rpc.h 00:08:00.662 TEST_HEADER include/spdk/scheduler.h 00:08:00.662 TEST_HEADER include/spdk/scsi.h 
00:08:00.662 TEST_HEADER include/spdk/scsi_spec.h 00:08:00.662 TEST_HEADER include/spdk/sock.h 00:08:00.662 TEST_HEADER include/spdk/stdinc.h 00:08:00.662 TEST_HEADER include/spdk/string.h 00:08:00.662 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:00.662 TEST_HEADER include/spdk/thread.h 00:08:00.662 TEST_HEADER include/spdk/trace.h 00:08:00.662 TEST_HEADER include/spdk/trace_parser.h 00:08:00.920 TEST_HEADER include/spdk/tree.h 00:08:00.920 LINK stub 00:08:00.920 TEST_HEADER include/spdk/ublk.h 00:08:00.920 TEST_HEADER include/spdk/util.h 00:08:00.920 TEST_HEADER include/spdk/uuid.h 00:08:00.920 TEST_HEADER include/spdk/version.h 00:08:00.920 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:00.920 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:00.920 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:00.920 TEST_HEADER include/spdk/vhost.h 00:08:00.920 TEST_HEADER include/spdk/vmd.h 00:08:00.920 TEST_HEADER include/spdk/xor.h 00:08:00.920 TEST_HEADER include/spdk/zipf.h 00:08:00.920 CXX test/cpp_headers/accel.o 00:08:00.920 LINK thread 00:08:00.920 CC test/env/vtophys/vtophys.o 00:08:00.920 CC test/event/event_perf/event_perf.o 00:08:00.920 LINK hello_sock 00:08:00.920 CC test/env/mem_callbacks/mem_callbacks.o 00:08:00.920 CXX test/cpp_headers/accel_module.o 00:08:01.179 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:01.179 CC test/env/memory/memory_ut.o 00:08:01.179 LINK vtophys 00:08:01.179 CC test/env/pci/pci_ut.o 00:08:01.179 LINK event_perf 00:08:01.179 CXX test/cpp_headers/assert.o 00:08:01.438 LINK vhost_fuzz 00:08:01.438 CXX test/cpp_headers/barrier.o 00:08:01.438 LINK env_dpdk_post_init 00:08:01.438 CC test/event/reactor/reactor.o 00:08:01.438 CC examples/accel/perf/accel_perf.o 00:08:01.438 CC test/event/reactor_perf/reactor_perf.o 00:08:01.438 CC test/event/app_repeat/app_repeat.o 00:08:01.697 CXX test/cpp_headers/base64.o 00:08:01.697 LINK reactor 00:08:01.697 CC test/event/scheduler/scheduler.o 00:08:01.697 LINK pci_ut 00:08:01.697 LINK 
reactor_perf 00:08:01.697 LINK mem_callbacks 00:08:01.697 LINK app_repeat 00:08:01.697 CXX test/cpp_headers/bdev.o 00:08:01.697 CXX test/cpp_headers/bdev_module.o 00:08:01.955 LINK scheduler 00:08:01.955 CC examples/blob/hello_world/hello_blob.o 00:08:01.955 CC examples/blob/cli/blobcli.o 00:08:01.955 CXX test/cpp_headers/bdev_zone.o 00:08:01.955 LINK accel_perf 00:08:02.213 CC examples/nvme/hello_world/hello_world.o 00:08:02.213 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:02.213 CC test/nvme/aer/aer.o 00:08:02.213 CC test/nvme/reset/reset.o 00:08:02.213 LINK hello_blob 00:08:02.213 CXX test/cpp_headers/bit_array.o 00:08:02.213 CC examples/nvme/reconnect/reconnect.o 00:08:02.471 LINK hello_world 00:08:02.471 CXX test/cpp_headers/bit_pool.o 00:08:02.471 LINK hello_fsdev 00:08:02.471 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:02.471 LINK aer 00:08:02.471 LINK reset 00:08:02.471 LINK memory_ut 00:08:02.729 CXX test/cpp_headers/blob_bdev.o 00:08:02.729 LINK blobcli 00:08:02.729 CC examples/nvme/arbitration/arbitration.o 00:08:02.729 CXX test/cpp_headers/blobfs_bdev.o 00:08:02.729 CXX test/cpp_headers/blobfs.o 00:08:02.729 CXX test/cpp_headers/blob.o 00:08:02.729 LINK reconnect 00:08:02.729 CC test/nvme/sgl/sgl.o 00:08:02.729 CXX test/cpp_headers/conf.o 00:08:02.988 CXX test/cpp_headers/config.o 00:08:02.988 CC test/rpc_client/rpc_client_test.o 00:08:02.988 CXX test/cpp_headers/cpuset.o 00:08:02.988 CC examples/nvme/hotplug/hotplug.o 00:08:02.988 LINK iscsi_fuzz 00:08:02.988 CC test/nvme/e2edp/nvme_dp.o 00:08:02.988 LINK arbitration 00:08:03.246 CC test/accel/dif/dif.o 00:08:03.246 CC test/blobfs/mkfs/mkfs.o 00:08:03.246 LINK nvme_manage 00:08:03.246 LINK rpc_client_test 00:08:03.246 LINK sgl 00:08:03.246 CXX test/cpp_headers/crc16.o 00:08:03.246 LINK hotplug 00:08:03.246 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:03.503 CXX test/cpp_headers/crc32.o 00:08:03.503 LINK mkfs 00:08:03.503 CC examples/nvme/abort/abort.o 00:08:03.503 LINK nvme_dp 00:08:03.503 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:08:03.503 CC test/nvme/overhead/overhead.o 00:08:03.503 CXX test/cpp_headers/crc64.o 00:08:03.503 CXX test/cpp_headers/dif.o 00:08:03.503 LINK cmb_copy 00:08:03.761 CC examples/bdev/hello_world/hello_bdev.o 00:08:03.761 LINK pmr_persistence 00:08:03.761 CC test/nvme/startup/startup.o 00:08:03.761 CC test/nvme/err_injection/err_injection.o 00:08:03.761 CXX test/cpp_headers/dma.o 00:08:03.761 CXX test/cpp_headers/endian.o 00:08:03.761 LINK overhead 00:08:03.761 LINK abort 00:08:03.761 CC examples/bdev/bdevperf/bdevperf.o 00:08:04.020 LINK startup 00:08:04.020 LINK err_injection 00:08:04.020 CXX test/cpp_headers/env_dpdk.o 00:08:04.020 CXX test/cpp_headers/env.o 00:08:04.020 LINK hello_bdev 00:08:04.020 LINK dif 00:08:04.020 CXX test/cpp_headers/event.o 00:08:04.020 CXX test/cpp_headers/fd_group.o 00:08:04.278 CC test/nvme/reserve/reserve.o 00:08:04.278 CC test/nvme/simple_copy/simple_copy.o 00:08:04.278 CXX test/cpp_headers/fd.o 00:08:04.278 CC test/nvme/connect_stress/connect_stress.o 00:08:04.278 CC test/nvme/boot_partition/boot_partition.o 00:08:04.278 CC test/nvme/compliance/nvme_compliance.o 00:08:04.278 CC test/lvol/esnap/esnap.o 00:08:04.278 CC test/nvme/fused_ordering/fused_ordering.o 00:08:04.538 LINK reserve 00:08:04.538 CXX test/cpp_headers/file.o 00:08:04.538 LINK boot_partition 00:08:04.538 LINK connect_stress 00:08:04.538 CC test/bdev/bdevio/bdevio.o 00:08:04.538 LINK fused_ordering 00:08:04.538 LINK simple_copy 00:08:04.538 CXX test/cpp_headers/fsdev.o 00:08:04.885 CXX test/cpp_headers/fsdev_module.o 00:08:04.885 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:04.886 LINK nvme_compliance 00:08:04.886 CXX test/cpp_headers/ftl.o 00:08:04.886 CC test/nvme/fdp/fdp.o 00:08:04.886 CXX test/cpp_headers/fuse_dispatcher.o 00:08:04.886 CC test/nvme/cuse/cuse.o 00:08:04.886 CXX test/cpp_headers/gpt_spec.o 00:08:04.886 CXX test/cpp_headers/hexlify.o 00:08:04.886 LINK doorbell_aers 00:08:04.886 CXX 
test/cpp_headers/histogram_data.o 00:08:04.886 CXX test/cpp_headers/idxd.o 00:08:05.143 LINK bdevperf 00:08:05.143 LINK bdevio 00:08:05.143 CXX test/cpp_headers/idxd_spec.o 00:08:05.143 CXX test/cpp_headers/init.o 00:08:05.143 CXX test/cpp_headers/ioat.o 00:08:05.143 CXX test/cpp_headers/ioat_spec.o 00:08:05.143 CXX test/cpp_headers/iscsi_spec.o 00:08:05.143 LINK fdp 00:08:05.143 CXX test/cpp_headers/json.o 00:08:05.143 CXX test/cpp_headers/jsonrpc.o 00:08:05.400 CXX test/cpp_headers/keyring.o 00:08:05.400 CXX test/cpp_headers/keyring_module.o 00:08:05.401 CXX test/cpp_headers/likely.o 00:08:05.401 CXX test/cpp_headers/log.o 00:08:05.401 CXX test/cpp_headers/lvol.o 00:08:05.401 CXX test/cpp_headers/md5.o 00:08:05.401 CXX test/cpp_headers/memory.o 00:08:05.401 CXX test/cpp_headers/mmio.o 00:08:05.401 CXX test/cpp_headers/nbd.o 00:08:05.401 CC examples/nvmf/nvmf/nvmf.o 00:08:05.401 CXX test/cpp_headers/net.o 00:08:05.659 CXX test/cpp_headers/notify.o 00:08:05.659 CXX test/cpp_headers/nvme.o 00:08:05.659 CXX test/cpp_headers/nvme_intel.o 00:08:05.659 CXX test/cpp_headers/nvme_ocssd.o 00:08:05.659 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:05.659 CXX test/cpp_headers/nvme_spec.o 00:08:05.659 CXX test/cpp_headers/nvme_zns.o 00:08:05.659 CXX test/cpp_headers/nvmf_cmd.o 00:08:05.916 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:05.916 CXX test/cpp_headers/nvmf.o 00:08:05.916 CXX test/cpp_headers/nvmf_spec.o 00:08:05.916 CXX test/cpp_headers/nvmf_transport.o 00:08:05.916 LINK nvmf 00:08:05.916 CXX test/cpp_headers/opal.o 00:08:05.916 CXX test/cpp_headers/opal_spec.o 00:08:05.916 CXX test/cpp_headers/pci_ids.o 00:08:05.916 CXX test/cpp_headers/pipe.o 00:08:05.916 CXX test/cpp_headers/queue.o 00:08:05.916 CXX test/cpp_headers/reduce.o 00:08:06.174 CXX test/cpp_headers/rpc.o 00:08:06.174 CXX test/cpp_headers/scheduler.o 00:08:06.174 CXX test/cpp_headers/scsi.o 00:08:06.174 CXX test/cpp_headers/scsi_spec.o 00:08:06.174 CXX test/cpp_headers/sock.o 00:08:06.174 CXX 
test/cpp_headers/stdinc.o 00:08:06.174 CXX test/cpp_headers/string.o 00:08:06.174 CXX test/cpp_headers/thread.o 00:08:06.174 CXX test/cpp_headers/trace.o 00:08:06.174 CXX test/cpp_headers/trace_parser.o 00:08:06.174 CXX test/cpp_headers/tree.o 00:08:06.479 CXX test/cpp_headers/ublk.o 00:08:06.479 CXX test/cpp_headers/util.o 00:08:06.479 CXX test/cpp_headers/uuid.o 00:08:06.479 CXX test/cpp_headers/version.o 00:08:06.479 CXX test/cpp_headers/vfio_user_pci.o 00:08:06.479 CXX test/cpp_headers/vfio_user_spec.o 00:08:06.479 CXX test/cpp_headers/vhost.o 00:08:06.479 CXX test/cpp_headers/vmd.o 00:08:06.479 LINK cuse 00:08:06.479 CXX test/cpp_headers/xor.o 00:08:06.479 CXX test/cpp_headers/zipf.o 00:08:11.741 LINK esnap 00:08:12.307 00:08:12.307 real 1m53.322s 00:08:12.307 user 10m35.185s 00:08:12.307 sys 1m58.298s 00:08:12.307 ************************************ 00:08:12.307 END TEST make 00:08:12.307 ************************************ 00:08:12.307 04:29:59 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:12.307 04:29:59 make -- common/autotest_common.sh@10 -- $ set +x 00:08:12.307 04:29:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:12.307 04:29:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:12.307 04:29:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:12.307 04:29:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.307 04:29:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:12.307 04:29:59 -- pm/common@44 -- $ pid=5302 00:08:12.307 04:29:59 -- pm/common@50 -- $ kill -TERM 5302 00:08:12.307 04:29:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.307 04:29:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:12.307 04:29:59 -- pm/common@44 -- $ pid=5304 00:08:12.307 04:29:59 -- pm/common@50 -- $ kill -TERM 5304 00:08:12.307 04:29:59 -- spdk/autorun.sh@26 -- $ (( 
SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:12.307 04:29:59 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:12.307 04:29:59 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.307 04:29:59 -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.307 04:29:59 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.307 04:29:59 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.307 04:29:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.307 04:29:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.307 04:29:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.307 04:29:59 -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.307 04:29:59 -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.307 04:29:59 -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.307 04:29:59 -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.307 04:29:59 -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.307 04:29:59 -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.307 04:29:59 -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.307 04:29:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.307 04:29:59 -- scripts/common.sh@344 -- # case "$op" in 00:08:12.307 04:29:59 -- scripts/common.sh@345 -- # : 1 00:08:12.307 04:29:59 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.307 04:29:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.307 04:29:59 -- scripts/common.sh@365 -- # decimal 1 00:08:12.307 04:29:59 -- scripts/common.sh@353 -- # local d=1 00:08:12.307 04:29:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.307 04:29:59 -- scripts/common.sh@355 -- # echo 1 00:08:12.307 04:29:59 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.307 04:29:59 -- scripts/common.sh@366 -- # decimal 2 00:08:12.307 04:29:59 -- scripts/common.sh@353 -- # local d=2 00:08:12.307 04:29:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.307 04:29:59 -- scripts/common.sh@355 -- # echo 2 00:08:12.307 04:29:59 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.307 04:29:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.307 04:29:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.307 04:29:59 -- scripts/common.sh@368 -- # return 0 00:08:12.307 04:29:59 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.307 04:29:59 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.307 --rc genhtml_branch_coverage=1 00:08:12.307 --rc genhtml_function_coverage=1 00:08:12.307 --rc genhtml_legend=1 00:08:12.307 --rc geninfo_all_blocks=1 00:08:12.307 --rc geninfo_unexecuted_blocks=1 00:08:12.307 00:08:12.307 ' 00:08:12.307 04:29:59 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.307 --rc genhtml_branch_coverage=1 00:08:12.307 --rc genhtml_function_coverage=1 00:08:12.307 --rc genhtml_legend=1 00:08:12.307 --rc geninfo_all_blocks=1 00:08:12.307 --rc geninfo_unexecuted_blocks=1 00:08:12.307 00:08:12.307 ' 00:08:12.307 04:29:59 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.307 --rc genhtml_branch_coverage=1 00:08:12.307 --rc 
genhtml_function_coverage=1 00:08:12.307 --rc genhtml_legend=1 00:08:12.307 --rc geninfo_all_blocks=1 00:08:12.307 --rc geninfo_unexecuted_blocks=1 00:08:12.307 00:08:12.307 ' 00:08:12.307 04:29:59 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.307 --rc genhtml_branch_coverage=1 00:08:12.307 --rc genhtml_function_coverage=1 00:08:12.307 --rc genhtml_legend=1 00:08:12.307 --rc geninfo_all_blocks=1 00:08:12.307 --rc geninfo_unexecuted_blocks=1 00:08:12.307 00:08:12.307 ' 00:08:12.307 04:29:59 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.307 04:29:59 -- nvmf/common.sh@7 -- # uname -s 00:08:12.307 04:29:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.307 04:29:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.307 04:29:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.307 04:29:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.307 04:29:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.307 04:29:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.307 04:29:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.307 04:29:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.307 04:29:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.307 04:29:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.566 04:29:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b590e78-c7c2-47b8-8d4e-2e32c1bfd2eb 00:08:12.566 04:29:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=7b590e78-c7c2-47b8-8d4e-2e32c1bfd2eb 00:08:12.566 04:29:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.566 04:29:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.566 04:29:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:12.566 04:29:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:12.566 04:29:59 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.566 04:29:59 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.566 04:29:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.566 04:29:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.566 04:29:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.566 04:29:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.566 04:29:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.566 04:29:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.566 04:29:59 -- paths/export.sh@5 -- # export PATH 00:08:12.566 04:29:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.566 04:29:59 -- nvmf/common.sh@51 -- # : 0 00:08:12.566 04:29:59 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.566 04:29:59 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.566 04:29:59 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:08:12.566 04:29:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.566 04:29:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.566 04:29:59 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.566 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.566 04:29:59 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.566 04:29:59 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.566 04:29:59 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.566 04:29:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:12.566 04:29:59 -- spdk/autotest.sh@32 -- # uname -s 00:08:12.566 04:29:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:12.566 04:29:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:12.566 04:29:59 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:12.566 04:29:59 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:12.566 04:29:59 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:12.566 04:29:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:12.566 04:30:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:12.566 04:30:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:12.566 04:30:00 -- spdk/autotest.sh@48 -- # udevadm_pid=54517 00:08:12.566 04:30:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:12.566 04:30:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:12.566 04:30:00 -- pm/common@17 -- # local monitor 00:08:12.566 04:30:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.566 04:30:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.566 04:30:00 -- pm/common@25 -- # sleep 1 00:08:12.566 04:30:00 -- pm/common@21 -- # date +%s 00:08:12.566 04:30:00 -- 
pm/common@21 -- # date +%s 00:08:12.566 04:30:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732681800 00:08:12.566 04:30:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732681800 00:08:12.566 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732681800_collect-cpu-load.pm.log 00:08:12.566 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732681800_collect-vmstat.pm.log 00:08:13.500 04:30:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:13.500 04:30:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:13.500 04:30:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:13.500 04:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:13.500 04:30:01 -- spdk/autotest.sh@59 -- # create_test_list 00:08:13.500 04:30:01 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:13.500 04:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:13.500 04:30:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:13.500 04:30:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:13.500 04:30:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:13.500 04:30:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:13.500 04:30:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:13.500 04:30:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:13.500 04:30:01 -- common/autotest_common.sh@1457 -- # uname 00:08:13.500 04:30:01 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:13.500 04:30:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:13.500 04:30:01 -- common/autotest_common.sh@1477 -- 
# uname 00:08:13.500 04:30:01 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:13.500 04:30:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:13.500 04:30:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:13.758 lcov: LCOV version 1.15 00:08:13.758 04:30:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:31.867 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:31.867 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:49.996 04:30:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:49.996 04:30:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.996 04:30:34 -- common/autotest_common.sh@10 -- # set +x 00:08:49.996 04:30:34 -- spdk/autotest.sh@78 -- # rm -f 00:08:49.996 04:30:34 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:49.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.996 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:49.996 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:49.996 04:30:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:49.996 04:30:35 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:49.996 04:30:35 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:49.996 04:30:35 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:49.996 
04:30:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:49.996 04:30:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:49.996 04:30:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:49.996 04:30:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:49.996 04:30:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:49.996 04:30:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:49.997 04:30:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:49.997 04:30:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:49.997 04:30:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:49.997 04:30:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:49.997 04:30:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:49.997 04:30:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:08:49.997 04:30:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:08:49.997 04:30:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:49.997 04:30:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:49.997 04:30:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:49.997 04:30:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:08:49.997 04:30:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:08:49.997 04:30:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:49.997 04:30:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:49.997 04:30:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:49.997 04:30:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:49.997 04:30:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:49.997 04:30:35 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:08:49.997 04:30:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:49.997 04:30:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:49.997 No valid GPT data, bailing 00:08:49.997 04:30:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:49.997 04:30:35 -- scripts/common.sh@394 -- # pt= 00:08:49.997 04:30:35 -- scripts/common.sh@395 -- # return 1 00:08:49.997 04:30:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:49.997 1+0 records in 00:08:49.997 1+0 records out 00:08:49.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360756 s, 291 MB/s 00:08:49.997 04:30:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:49.997 04:30:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:49.997 04:30:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:49.997 04:30:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:49.997 04:30:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:49.997 No valid GPT data, bailing 00:08:49.997 04:30:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:49.997 04:30:35 -- scripts/common.sh@394 -- # pt= 00:08:49.997 04:30:35 -- scripts/common.sh@395 -- # return 1 00:08:49.997 04:30:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:49.997 1+0 records in 00:08:49.997 1+0 records out 00:08:49.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422818 s, 248 MB/s 00:08:49.997 04:30:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:49.997 04:30:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:49.997 04:30:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:49.997 04:30:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:49.997 04:30:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:08:49.997 No valid GPT data, bailing 00:08:49.997 04:30:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:49.997 04:30:35 -- scripts/common.sh@394 -- # pt= 00:08:49.997 04:30:35 -- scripts/common.sh@395 -- # return 1 00:08:49.997 04:30:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:49.997 1+0 records in 00:08:49.997 1+0 records out 00:08:49.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00372921 s, 281 MB/s 00:08:49.997 04:30:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:49.997 04:30:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:49.997 04:30:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:49.997 04:30:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:08:49.997 04:30:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:49.997 No valid GPT data, bailing 00:08:49.997 04:30:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:49.997 04:30:35 -- scripts/common.sh@394 -- # pt= 00:08:49.997 04:30:35 -- scripts/common.sh@395 -- # return 1 00:08:49.997 04:30:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:49.997 1+0 records in 00:08:49.997 1+0 records out 00:08:49.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460531 s, 228 MB/s 00:08:49.997 04:30:35 -- spdk/autotest.sh@105 -- # sync 00:08:49.997 04:30:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:49.997 04:30:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:49.997 04:30:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:50.255 04:30:37 -- spdk/autotest.sh@111 -- # uname -s 00:08:50.255 04:30:37 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:50.255 04:30:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:50.255 04:30:37 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
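Each `dd` above runs only after `blkid -s PTTYPE` reports nothing for the namespace ("No valid GPT data, bailing"); the ~230–290 MB/s figures are simply 1 MiB of zeros landing in the page cache. A sketch of that guard-then-wipe step (function name illustrative; the harness additionally consults `spdk-gpt.py`, which is omitted here):

```shell
#!/usr/bin/env bash
# Zero the first MiB of a device only when blkid finds no partition table.
# blkid prints the PTTYPE value ("gpt", "dos", ...) or nothing at all.
wipe_if_unpartitioned() {
    local dev=$1 pt
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
    if [[ -z $pt ]]; then
        # conv=notrunc keeps regular files at their original size,
        # matching dd's behaviour on a real block device.
        dd if=/dev/zero of="$dev" bs=1M count=1 conv=notrunc status=none
    fi
}
```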
00:08:51.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:51.190 Hugepages 00:08:51.190 node hugesize free / total 00:08:51.190 node0 1048576kB 0 / 0 00:08:51.190 node0 2048kB 0 / 0 00:08:51.190 00:08:51.190 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:51.190 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:51.190 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:51.190 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:51.190 04:30:38 -- spdk/autotest.sh@117 -- # uname -s 00:08:51.190 04:30:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:51.190 04:30:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:51.190 04:30:38 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:51.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:52.029 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.029 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.029 04:30:39 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:53.405 04:30:40 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:53.405 04:30:40 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:53.405 04:30:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:53.405 04:30:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:53.405 04:30:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:53.405 04:30:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:53.405 04:30:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:53.405 04:30:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:53.405 04:30:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:53.405 04:30:40 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:53.405 04:30:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:53.405 04:30:40 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:53.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:53.664 Waiting for block devices as requested 00:08:53.664 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.664 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.664 04:30:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:53.664 04:30:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:53.664 04:30:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:53.664 04:30:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:53.664 04:30:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:53.664 04:30:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:53.664 04:30:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:53.664 04:30:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:53.664 04:30:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:53.664 04:30:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:53.664 04:30:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:53.664 04:30:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:53.664 04:30:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:53.664 04:30:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:53.664 04:30:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:53.923 04:30:41 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:08:53.923 04:30:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:53.923 04:30:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:53.923 04:30:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:53.923 04:30:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:53.923 04:30:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:53.923 04:30:41 -- common/autotest_common.sh@1543 -- # continue 00:08:53.923 04:30:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:53.923 04:30:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:53.923 04:30:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:53.923 04:30:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:53.923 04:30:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:53.923 04:30:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:53.923 04:30:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:53.923 04:30:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:53.923 04:30:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:53.923 04:30:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:53.923 04:30:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:53.923 04:30:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:53.923 04:30:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:53.923 04:30:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:53.923 04:30:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:53.923 04:30:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:53.923 04:30:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 
00:08:53.923 04:30:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:53.923 04:30:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:53.923 04:30:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:53.923 04:30:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:53.923 04:30:41 -- common/autotest_common.sh@1543 -- # continue 00:08:53.923 04:30:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:53.923 04:30:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.923 04:30:41 -- common/autotest_common.sh@10 -- # set +x 00:08:53.923 04:30:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:53.923 04:30:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.923 04:30:41 -- common/autotest_common.sh@10 -- # set +x 00:08:53.923 04:30:41 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:54.491 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.750 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:54.750 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:54.750 04:30:42 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:54.750 04:30:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.750 04:30:42 -- common/autotest_common.sh@10 -- # set +x 00:08:54.750 04:30:42 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:54.750 04:30:42 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:54.750 04:30:42 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:54.750 04:30:42 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:54.750 04:30:42 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:54.750 04:30:42 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:54.750 04:30:42 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:54.750 04:30:42 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:54.750 
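The `id-ctrl` parsing traced above greps the `oacs` field (`0x12a` here), masks bit 3 (`0x8`, Namespace Management support) to get `oacs_ns_manage=8`, then confirms `unvmcap` is 0 before moving on. A sketch of that bit test on captured output (the sample text in the test is illustrative, not from a real controller):

```shell
#!/usr/bin/env bash
# Given captured `nvme id-ctrl` text, report whether the controller
# advertises Namespace Management support: bit 3 (0x8) of the OACS field.
ns_manage_supported() {
    local id_ctrl_output=$1 oacs
    oacs=$(grep oacs <<<"$id_ctrl_output" | cut -d: -f2)
    # Arithmetic evaluation tolerates the leading space and 0x prefix.
    (( oacs & 0x8 ))
}
```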
04:30:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:54.750 04:30:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:54.750 04:30:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:54.750 04:30:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:54.750 04:30:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:54.750 04:30:42 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:54.750 04:30:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:54.750 04:30:42 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:54.750 04:30:42 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:54.750 04:30:42 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:54.750 04:30:42 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:54.750 04:30:42 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:54.750 04:30:42 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:54.750 04:30:42 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:54.750 04:30:42 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:54.750 04:30:42 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:54.750 04:30:42 -- common/autotest_common.sh@1572 -- # return 0 00:08:54.750 04:30:42 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:54.750 04:30:42 -- common/autotest_common.sh@1580 -- # return 0 00:08:54.750 04:30:42 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:54.750 04:30:42 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:54.750 04:30:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:54.750 04:30:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:54.750 04:30:42 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:54.750 04:30:42 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.750 04:30:42 -- common/autotest_common.sh@10 -- # set +x 00:08:55.009 04:30:42 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:55.009 04:30:42 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:55.009 04:30:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.009 04:30:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.009 04:30:42 -- common/autotest_common.sh@10 -- # set +x 00:08:55.009 ************************************ 00:08:55.009 START TEST env 00:08:55.009 ************************************ 00:08:55.009 04:30:42 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:55.009 * Looking for test storage... 00:08:55.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:55.009 04:30:42 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.009 04:30:42 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.009 04:30:42 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.009 04:30:42 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.009 04:30:42 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.009 04:30:42 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.009 04:30:42 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.009 04:30:42 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.009 04:30:42 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.009 04:30:42 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.009 04:30:42 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.009 04:30:42 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.009 04:30:42 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.009 04:30:42 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.009 04:30:42 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.009 04:30:42 env -- 
scripts/common.sh@344 -- # case "$op" in 00:08:55.009 04:30:42 env -- scripts/common.sh@345 -- # : 1 00:08:55.009 04:30:42 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.009 04:30:42 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.009 04:30:42 env -- scripts/common.sh@365 -- # decimal 1 00:08:55.009 04:30:42 env -- scripts/common.sh@353 -- # local d=1 00:08:55.009 04:30:42 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.009 04:30:42 env -- scripts/common.sh@355 -- # echo 1 00:08:55.009 04:30:42 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.009 04:30:42 env -- scripts/common.sh@366 -- # decimal 2 00:08:55.009 04:30:42 env -- scripts/common.sh@353 -- # local d=2 00:08:55.009 04:30:42 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.009 04:30:42 env -- scripts/common.sh@355 -- # echo 2 00:08:55.009 04:30:42 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.009 04:30:42 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.009 04:30:42 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.009 04:30:42 env -- scripts/common.sh@368 -- # return 0 00:08:55.009 04:30:42 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.009 04:30:42 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.009 --rc genhtml_branch_coverage=1 00:08:55.009 --rc genhtml_function_coverage=1 00:08:55.009 --rc genhtml_legend=1 00:08:55.009 --rc geninfo_all_blocks=1 00:08:55.009 --rc geninfo_unexecuted_blocks=1 00:08:55.009 00:08:55.010 ' 00:08:55.010 04:30:42 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.010 --rc genhtml_branch_coverage=1 00:08:55.010 --rc genhtml_function_coverage=1 00:08:55.010 --rc genhtml_legend=1 00:08:55.010 --rc 
geninfo_all_blocks=1 00:08:55.010 --rc geninfo_unexecuted_blocks=1 00:08:55.010 00:08:55.010 ' 00:08:55.010 04:30:42 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.010 --rc genhtml_branch_coverage=1 00:08:55.010 --rc genhtml_function_coverage=1 00:08:55.010 --rc genhtml_legend=1 00:08:55.010 --rc geninfo_all_blocks=1 00:08:55.010 --rc geninfo_unexecuted_blocks=1 00:08:55.010 00:08:55.010 ' 00:08:55.010 04:30:42 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.010 --rc genhtml_branch_coverage=1 00:08:55.010 --rc genhtml_function_coverage=1 00:08:55.010 --rc genhtml_legend=1 00:08:55.010 --rc geninfo_all_blocks=1 00:08:55.010 --rc geninfo_unexecuted_blocks=1 00:08:55.010 00:08:55.010 ' 00:08:55.010 04:30:42 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:55.010 04:30:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.010 04:30:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.010 04:30:42 env -- common/autotest_common.sh@10 -- # set +x 00:08:55.010 ************************************ 00:08:55.010 START TEST env_memory 00:08:55.010 ************************************ 00:08:55.010 04:30:42 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:55.010 00:08:55.010 00:08:55.010 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.010 http://cunit.sourceforge.net/ 00:08:55.010 00:08:55.010 00:08:55.010 Suite: memory 00:08:55.268 Test: alloc and free memory map ...[2024-11-27 04:30:42.659963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:55.268 passed 00:08:55.268 Test: mem map translation ...[2024-11-27 04:30:42.720218] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:55.268 [2024-11-27 04:30:42.720486] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:55.268 [2024-11-27 04:30:42.720735] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:55.268 [2024-11-27 04:30:42.721005] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:55.268 passed 00:08:55.268 Test: mem map registration ...[2024-11-27 04:30:42.818599] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:55.268 [2024-11-27 04:30:42.818866] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:55.268 passed 00:08:55.527 Test: mem map adjacent registrations ...passed 00:08:55.527 00:08:55.527 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.527 suites 1 1 n/a 0 0 00:08:55.527 tests 4 4 4 0 0 00:08:55.527 asserts 152 152 152 0 n/a 00:08:55.527 00:08:55.527 Elapsed time = 0.339 seconds 00:08:55.527 ************************************ 00:08:55.527 END TEST env_memory 00:08:55.527 ************************************ 00:08:55.527 00:08:55.527 real 0m0.384s 00:08:55.527 user 0m0.343s 00:08:55.527 sys 0m0.031s 00:08:55.527 04:30:42 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.527 04:30:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:55.527 04:30:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:55.527 
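The `lt 1.15 2` trace a little earlier (from `scripts/common.sh`) decides whether the installed lcov predates 2.x: both version strings are read into arrays split on `.-:` and compared component by component, with a missing component treated as 0. A condensed sketch of that comparison (simplified: it handles only numeric dot-separated components, unlike the harness's fuller splitter):

```shell
#!/usr/bin/env bash
# Return 0 when version $1 is strictly older than version $2.
# Components are compared numerically, left to right; a missing
# component (e.g. comparing "1.15" against plain "2") counts as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```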
04:30:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.527 04:30:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.527 04:30:43 env -- common/autotest_common.sh@10 -- # set +x 00:08:55.527 ************************************ 00:08:55.527 START TEST env_vtophys 00:08:55.527 ************************************ 00:08:55.527 04:30:43 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:55.528 EAL: lib.eal log level changed from notice to debug 00:08:55.528 EAL: Detected lcore 0 as core 0 on socket 0 00:08:55.528 EAL: Detected lcore 1 as core 0 on socket 0 00:08:55.528 EAL: Detected lcore 2 as core 0 on socket 0 00:08:55.528 EAL: Detected lcore 3 as core 0 on socket 0 00:08:55.528 EAL: Detected lcore 4 as core 0 on socket 0 00:08:55.528 EAL: Detected lcore 5 as core 0 on socket 0 00:08:55.528 EAL: Detected lcore 6 as core 0 on socket 0 00:08:55.528 EAL: Detected lcore 7 as core 0 on socket 0 00:08:55.528 EAL: Detected lcore 8 as core 0 on socket 0 00:08:55.528 EAL: Detected lcore 9 as core 0 on socket 0 00:08:55.528 EAL: Maximum logical cores by configuration: 128 00:08:55.528 EAL: Detected CPU lcores: 10 00:08:55.528 EAL: Detected NUMA nodes: 1 00:08:55.528 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:55.528 EAL: Detected shared linkage of DPDK 00:08:55.790 EAL: No shared files mode enabled, IPC will be disabled 00:08:55.790 EAL: Selected IOVA mode 'PA' 00:08:55.790 EAL: Probing VFIO support... 00:08:55.790 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:55.790 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:55.790 EAL: Ask a virtual area of 0x2e000 bytes 00:08:55.790 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:55.790 EAL: Setting up physically contiguous memory... 
00:08:55.790 EAL: Setting maximum number of open files to 524288 00:08:55.790 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:55.790 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:55.790 EAL: Ask a virtual area of 0x61000 bytes 00:08:55.790 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:55.790 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:55.790 EAL: Ask a virtual area of 0x400000000 bytes 00:08:55.790 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:55.790 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:55.790 EAL: Ask a virtual area of 0x61000 bytes 00:08:55.790 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:55.790 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:55.790 EAL: Ask a virtual area of 0x400000000 bytes 00:08:55.790 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:55.791 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:55.791 EAL: Ask a virtual area of 0x61000 bytes 00:08:55.791 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:55.791 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:55.791 EAL: Ask a virtual area of 0x400000000 bytes 00:08:55.791 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:55.791 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:55.791 EAL: Ask a virtual area of 0x61000 bytes 00:08:55.791 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:55.791 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:55.791 EAL: Ask a virtual area of 0x400000000 bytes 00:08:55.791 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:55.791 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:55.791 EAL: Hugepages will be freed exactly as allocated. 
00:08:55.791 EAL: No shared files mode enabled, IPC is disabled 00:08:55.791 EAL: No shared files mode enabled, IPC is disabled 00:08:55.791 EAL: TSC frequency is ~2200000 KHz 00:08:55.791 EAL: Main lcore 0 is ready (tid=7fc9372c4a40;cpuset=[0]) 00:08:55.791 EAL: Trying to obtain current memory policy. 00:08:55.791 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:55.791 EAL: Restoring previous memory policy: 0 00:08:55.791 EAL: request: mp_malloc_sync 00:08:55.791 EAL: No shared files mode enabled, IPC is disabled 00:08:55.791 EAL: Heap on socket 0 was expanded by 2MB 00:08:55.791 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:55.791 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:55.791 EAL: Mem event callback 'spdk:(nil)' registered 00:08:55.791 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:55.791 00:08:55.791 00:08:55.791 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.791 http://cunit.sourceforge.net/ 00:08:55.791 00:08:55.791 00:08:55.791 Suite: components_suite 00:08:56.358 Test: vtophys_malloc_test ...passed 00:08:56.359 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:56.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.359 EAL: Restoring previous memory policy: 4 00:08:56.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.359 EAL: request: mp_malloc_sync 00:08:56.359 EAL: No shared files mode enabled, IPC is disabled 00:08:56.359 EAL: Heap on socket 0 was expanded by 4MB 00:08:56.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.359 EAL: request: mp_malloc_sync 00:08:56.359 EAL: No shared files mode enabled, IPC is disabled 00:08:56.359 EAL: Heap on socket 0 was shrunk by 4MB 00:08:56.359 EAL: Trying to obtain current memory policy. 
00:08:56.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.359 EAL: Restoring previous memory policy: 4 00:08:56.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.359 EAL: request: mp_malloc_sync 00:08:56.359 EAL: No shared files mode enabled, IPC is disabled 00:08:56.359 EAL: Heap on socket 0 was expanded by 6MB 00:08:56.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.359 EAL: request: mp_malloc_sync 00:08:56.359 EAL: No shared files mode enabled, IPC is disabled 00:08:56.359 EAL: Heap on socket 0 was shrunk by 6MB 00:08:56.359 EAL: Trying to obtain current memory policy. 00:08:56.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.359 EAL: Restoring previous memory policy: 4 00:08:56.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.359 EAL: request: mp_malloc_sync 00:08:56.359 EAL: No shared files mode enabled, IPC is disabled 00:08:56.359 EAL: Heap on socket 0 was expanded by 10MB 00:08:56.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.359 EAL: request: mp_malloc_sync 00:08:56.359 EAL: No shared files mode enabled, IPC is disabled 00:08:56.359 EAL: Heap on socket 0 was shrunk by 10MB 00:08:56.359 EAL: Trying to obtain current memory policy. 00:08:56.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.359 EAL: Restoring previous memory policy: 4 00:08:56.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.359 EAL: request: mp_malloc_sync 00:08:56.359 EAL: No shared files mode enabled, IPC is disabled 00:08:56.359 EAL: Heap on socket 0 was expanded by 18MB 00:08:56.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.359 EAL: request: mp_malloc_sync 00:08:56.359 EAL: No shared files mode enabled, IPC is disabled 00:08:56.359 EAL: Heap on socket 0 was shrunk by 18MB 00:08:56.359 EAL: Trying to obtain current memory policy. 
00:08:56.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.359 EAL: Restoring previous memory policy: 4 00:08:56.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.359 EAL: request: mp_malloc_sync 00:08:56.359 EAL: No shared files mode enabled, IPC is disabled 00:08:56.359 EAL: Heap on socket 0 was expanded by 34MB 00:08:56.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.359 EAL: request: mp_malloc_sync 00:08:56.359 EAL: No shared files mode enabled, IPC is disabled 00:08:56.359 EAL: Heap on socket 0 was shrunk by 34MB 00:08:56.617 EAL: Trying to obtain current memory policy. 00:08:56.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.617 EAL: Restoring previous memory policy: 4 00:08:56.617 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.617 EAL: request: mp_malloc_sync 00:08:56.617 EAL: No shared files mode enabled, IPC is disabled 00:08:56.617 EAL: Heap on socket 0 was expanded by 66MB 00:08:56.617 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.617 EAL: request: mp_malloc_sync 00:08:56.617 EAL: No shared files mode enabled, IPC is disabled 00:08:56.617 EAL: Heap on socket 0 was shrunk by 66MB 00:08:56.877 EAL: Trying to obtain current memory policy. 00:08:56.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.877 EAL: Restoring previous memory policy: 4 00:08:56.877 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.877 EAL: request: mp_malloc_sync 00:08:56.877 EAL: No shared files mode enabled, IPC is disabled 00:08:56.877 EAL: Heap on socket 0 was expanded by 130MB 00:08:56.877 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.136 EAL: request: mp_malloc_sync 00:08:57.136 EAL: No shared files mode enabled, IPC is disabled 00:08:57.136 EAL: Heap on socket 0 was shrunk by 130MB 00:08:57.136 EAL: Trying to obtain current memory policy. 
00:08:57.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.394 EAL: Restoring previous memory policy: 4 00:08:57.394 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.394 EAL: request: mp_malloc_sync 00:08:57.394 EAL: No shared files mode enabled, IPC is disabled 00:08:57.394 EAL: Heap on socket 0 was expanded by 258MB 00:08:57.652 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.653 EAL: request: mp_malloc_sync 00:08:57.653 EAL: No shared files mode enabled, IPC is disabled 00:08:57.653 EAL: Heap on socket 0 was shrunk by 258MB 00:08:58.219 EAL: Trying to obtain current memory policy. 00:08:58.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:58.219 EAL: Restoring previous memory policy: 4 00:08:58.219 EAL: Calling mem event callback 'spdk:(nil)' 00:08:58.219 EAL: request: mp_malloc_sync 00:08:58.219 EAL: No shared files mode enabled, IPC is disabled 00:08:58.219 EAL: Heap on socket 0 was expanded by 514MB 00:08:59.154 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.154 EAL: request: mp_malloc_sync 00:08:59.154 EAL: No shared files mode enabled, IPC is disabled 00:08:59.154 EAL: Heap on socket 0 was shrunk by 514MB 00:09:00.091 EAL: Trying to obtain current memory policy. 
00:09:00.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.091 EAL: Restoring previous memory policy: 4 00:09:00.091 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.091 EAL: request: mp_malloc_sync 00:09:00.091 EAL: No shared files mode enabled, IPC is disabled 00:09:00.091 EAL: Heap on socket 0 was expanded by 1026MB 00:09:01.994 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.994 EAL: request: mp_malloc_sync 00:09:01.994 EAL: No shared files mode enabled, IPC is disabled 00:09:01.994 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:03.368 passed 00:09:03.368 00:09:03.368 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.368 suites 1 1 n/a 0 0 00:09:03.368 tests 2 2 2 0 0 00:09:03.368 asserts 5299 5299 5299 0 n/a 00:09:03.368 00:09:03.368 Elapsed time = 7.609 seconds 00:09:03.368 EAL: Calling mem event callback 'spdk:(nil)' 00:09:03.368 EAL: request: mp_malloc_sync 00:09:03.368 EAL: No shared files mode enabled, IPC is disabled 00:09:03.368 EAL: Heap on socket 0 was shrunk by 2MB 00:09:03.368 EAL: No shared files mode enabled, IPC is disabled 00:09:03.368 EAL: No shared files mode enabled, IPC is disabled 00:09:03.368 EAL: No shared files mode enabled, IPC is disabled 00:09:03.627 ************************************ 00:09:03.627 END TEST env_vtophys 00:09:03.627 ************************************ 00:09:03.627 00:09:03.627 real 0m8.016s 00:09:03.627 user 0m6.772s 00:09:03.627 sys 0m1.070s 00:09:03.627 04:30:51 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.627 04:30:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:03.627 04:30:51 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:03.627 04:30:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.627 04:30:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.627 04:30:51 env -- common/autotest_common.sh@10 -- # set +x 00:09:03.627 
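The "expanded by N MB" sizes in the env_vtophys run above (34, 66, 130, 258, 514, 1026 MB) follow a simple pattern consistent with each test step allocating a power-of-two buffer and the EAL growing the heap by one extra 2 MB hugepage on top. A minimal sketch that reproduces the logged sizes; the rounding rule here is an assumption inferred from the log, not SPDK or DPDK source:

```python
# Reproduce the heap-expansion sizes seen in the env_vtophys log above.
# Assumption (inferred from the log, not from SPDK/DPDK code): each test
# step allocates 2**k MB and the EAL rounds the request up by one
# additional 2 MB hugepage.
HUGEPAGE_MB = 2

def expected_expansions(start_exp=5, end_exp=10):
    """Expansion sizes in MB for allocations of 2**k MB, k in [start_exp, end_exp]."""
    return [2**k + HUGEPAGE_MB for k in range(start_exp, end_exp + 1)]

print(expected_expansions())  # [34, 66, 130, 258, 514, 1026]
```

The matching "shrunk by N MB" lines show the same sizes being released, so the heap returns to its baseline after each step.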
************************************ 00:09:03.627 START TEST env_pci 00:09:03.627 ************************************ 00:09:03.627 04:30:51 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:03.627 00:09:03.627 00:09:03.627 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.627 http://cunit.sourceforge.net/ 00:09:03.627 00:09:03.627 00:09:03.627 Suite: pci 00:09:03.627 Test: pci_hook ...[2024-11-27 04:30:51.123428] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56842 has claimed it 00:09:03.627 EAL: Cannot find device (10000:00:01.0) 00:09:03.627 passed 00:09:03.627 00:09:03.627 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.627 suites 1 1 n/a 0 0 00:09:03.627 tests 1 1 1 0 0 00:09:03.627 asserts 25 25 25 0 n/a 00:09:03.627 00:09:03.627 Elapsed time = 0.008 seconds 00:09:03.627 EAL: Failed to attach device on primary process 00:09:03.627 00:09:03.627 real 0m0.080s 00:09:03.627 user 0m0.037s 00:09:03.627 sys 0m0.042s 00:09:03.627 ************************************ 00:09:03.627 END TEST env_pci 00:09:03.627 ************************************ 00:09:03.627 04:30:51 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.627 04:30:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:03.627 04:30:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:03.627 04:30:51 env -- env/env.sh@15 -- # uname 00:09:03.627 04:30:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:03.627 04:30:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:03.627 04:30:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:03.627 04:30:51 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:03.627 04:30:51 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.627 04:30:51 env -- common/autotest_common.sh@10 -- # set +x 00:09:03.627 ************************************ 00:09:03.627 START TEST env_dpdk_post_init 00:09:03.627 ************************************ 00:09:03.627 04:30:51 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:03.886 EAL: Detected CPU lcores: 10 00:09:03.886 EAL: Detected NUMA nodes: 1 00:09:03.886 EAL: Detected shared linkage of DPDK 00:09:03.886 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:03.886 EAL: Selected IOVA mode 'PA' 00:09:03.886 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:03.886 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:03.886 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:04.144 Starting DPDK initialization... 00:09:04.144 Starting SPDK post initialization... 00:09:04.144 SPDK NVMe probe 00:09:04.144 Attaching to 0000:00:10.0 00:09:04.144 Attaching to 0000:00:11.0 00:09:04.144 Attached to 0000:00:10.0 00:09:04.144 Attached to 0000:00:11.0 00:09:04.144 Cleaning up... 
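The post-init test above probes two emulated NVMe controllers (vendor:device 1b36:0010) at 0000:00:10.0 and 0000:00:11.0. When scripting against logs of this shape, a small helper can extract the probed PCI addresses; this parser is a hypothetical convenience, not part of the SPDK tree:

```python
import re

# Match EAL probe lines such as:
# "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)"
PROBE_RE = re.compile(
    r"Probe PCI driver: (?P<driver>\S+) \((?P<vid_did>[0-9a-f:]+)\) "
    r"device: (?P<bdf>[0-9a-fA-F:.]+)"
)

def probed_devices(log_text):
    """Return the list of PCI addresses (BDFs) probed in an EAL log."""
    return [m.group("bdf") for m in PROBE_RE.finditer(log_text)]

sample = (
    "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)\n"
    "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)\n"
)
print(probed_devices(sample))  # ['0000:00:10.0', '0000:00:11.0']
```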
00:09:04.144 00:09:04.144 real 0m0.306s 00:09:04.144 user 0m0.105s 00:09:04.144 sys 0m0.100s 00:09:04.144 04:30:51 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.144 ************************************ 00:09:04.144 END TEST env_dpdk_post_init 00:09:04.144 ************************************ 00:09:04.144 04:30:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:04.144 04:30:51 env -- env/env.sh@26 -- # uname 00:09:04.144 04:30:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:04.144 04:30:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:04.144 04:30:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.144 04:30:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.144 04:30:51 env -- common/autotest_common.sh@10 -- # set +x 00:09:04.144 ************************************ 00:09:04.144 START TEST env_mem_callbacks 00:09:04.144 ************************************ 00:09:04.144 04:30:51 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:04.144 EAL: Detected CPU lcores: 10 00:09:04.144 EAL: Detected NUMA nodes: 1 00:09:04.144 EAL: Detected shared linkage of DPDK 00:09:04.144 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:04.144 EAL: Selected IOVA mode 'PA' 00:09:04.403 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:04.403 00:09:04.403 00:09:04.403 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.403 http://cunit.sourceforge.net/ 00:09:04.403 00:09:04.403 00:09:04.403 Suite: memory 00:09:04.403 Test: test ... 
00:09:04.403 register 0x200000200000 2097152 00:09:04.403 malloc 3145728 00:09:04.403 register 0x200000400000 4194304 00:09:04.403 buf 0x2000004fffc0 len 3145728 PASSED 00:09:04.403 malloc 64 00:09:04.403 buf 0x2000004ffec0 len 64 PASSED 00:09:04.403 malloc 4194304 00:09:04.403 register 0x200000800000 6291456 00:09:04.403 buf 0x2000009fffc0 len 4194304 PASSED 00:09:04.403 free 0x2000004fffc0 3145728 00:09:04.403 free 0x2000004ffec0 64 00:09:04.403 unregister 0x200000400000 4194304 PASSED 00:09:04.403 free 0x2000009fffc0 4194304 00:09:04.403 unregister 0x200000800000 6291456 PASSED 00:09:04.403 malloc 8388608 00:09:04.403 register 0x200000400000 10485760 00:09:04.403 buf 0x2000005fffc0 len 8388608 PASSED 00:09:04.403 free 0x2000005fffc0 8388608 00:09:04.403 unregister 0x200000400000 10485760 PASSED 00:09:04.403 passed 00:09:04.403 00:09:04.403 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.403 suites 1 1 n/a 0 0 00:09:04.403 tests 1 1 1 0 0 00:09:04.403 asserts 15 15 15 0 n/a 00:09:04.403 00:09:04.403 Elapsed time = 0.065 seconds 00:09:04.403 00:09:04.403 real 0m0.281s 00:09:04.403 user 0m0.097s 00:09:04.403 sys 0m0.079s 00:09:04.404 04:30:51 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.404 04:30:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:04.404 ************************************ 00:09:04.404 END TEST env_mem_callbacks 00:09:04.404 ************************************ 00:09:04.404 00:09:04.404 real 0m9.522s 00:09:04.404 user 0m7.537s 00:09:04.404 sys 0m1.579s 00:09:04.404 04:30:51 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.404 04:30:51 env -- common/autotest_common.sh@10 -- # set +x 00:09:04.404 ************************************ 00:09:04.404 END TEST env 00:09:04.404 ************************************ 00:09:04.404 04:30:51 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:04.404 04:30:51 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.404 04:30:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.404 04:30:51 -- common/autotest_common.sh@10 -- # set +x 00:09:04.404 ************************************ 00:09:04.404 START TEST rpc 00:09:04.404 ************************************ 00:09:04.404 04:30:51 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:04.662 * Looking for test storage... 00:09:04.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:04.662 04:30:52 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.662 04:30:52 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.662 04:30:52 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.662 04:30:52 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.662 04:30:52 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.662 04:30:52 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.662 04:30:52 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.662 04:30:52 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.662 04:30:52 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.662 04:30:52 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.662 04:30:52 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.662 04:30:52 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:04.662 04:30:52 rpc -- scripts/common.sh@345 -- # : 1 00:09:04.662 04:30:52 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.662 04:30:52 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.662 04:30:52 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:04.662 04:30:52 rpc -- scripts/common.sh@353 -- # local d=1 00:09:04.662 04:30:52 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.662 04:30:52 rpc -- scripts/common.sh@355 -- # echo 1 00:09:04.662 04:30:52 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.662 04:30:52 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:04.662 04:30:52 rpc -- scripts/common.sh@353 -- # local d=2 00:09:04.662 04:30:52 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.662 04:30:52 rpc -- scripts/common.sh@355 -- # echo 2 00:09:04.662 04:30:52 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.662 04:30:52 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.662 04:30:52 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.662 04:30:52 rpc -- scripts/common.sh@368 -- # return 0 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:04.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.662 --rc genhtml_branch_coverage=1 00:09:04.662 --rc genhtml_function_coverage=1 00:09:04.662 --rc genhtml_legend=1 00:09:04.662 --rc geninfo_all_blocks=1 00:09:04.662 --rc geninfo_unexecuted_blocks=1 00:09:04.662 00:09:04.662 ' 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:04.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.662 --rc genhtml_branch_coverage=1 00:09:04.662 --rc genhtml_function_coverage=1 00:09:04.662 --rc genhtml_legend=1 00:09:04.662 --rc geninfo_all_blocks=1 00:09:04.662 --rc geninfo_unexecuted_blocks=1 00:09:04.662 00:09:04.662 ' 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:04.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:04.662 --rc genhtml_branch_coverage=1 00:09:04.662 --rc genhtml_function_coverage=1 00:09:04.662 --rc genhtml_legend=1 00:09:04.662 --rc geninfo_all_blocks=1 00:09:04.662 --rc geninfo_unexecuted_blocks=1 00:09:04.662 00:09:04.662 ' 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:04.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.662 --rc genhtml_branch_coverage=1 00:09:04.662 --rc genhtml_function_coverage=1 00:09:04.662 --rc genhtml_legend=1 00:09:04.662 --rc geninfo_all_blocks=1 00:09:04.662 --rc geninfo_unexecuted_blocks=1 00:09:04.662 00:09:04.662 ' 00:09:04.662 04:30:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56969 00:09:04.662 04:30:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:04.662 04:30:52 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:04.662 04:30:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56969 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@835 -- # '[' -z 56969 ']' 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.662 04:30:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.920 [2024-11-27 04:30:52.285348] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:09:04.920 [2024-11-27 04:30:52.285757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56969 ] 00:09:04.920 [2024-11-27 04:30:52.478210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.179 [2024-11-27 04:30:52.637443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:05.179 [2024-11-27 04:30:52.637852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56969' to capture a snapshot of events at runtime. 00:09:05.179 [2024-11-27 04:30:52.638032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.179 [2024-11-27 04:30:52.638264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.179 [2024-11-27 04:30:52.638390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56969 for offline analysis/debug. 
00:09:05.179 [2024-11-27 04:30:52.640153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.114 04:30:53 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.114 04:30:53 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:06.114 04:30:53 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:06.114 04:30:53 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:06.114 04:30:53 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:06.114 04:30:53 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:06.114 04:30:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.114 04:30:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.114 04:30:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.114 ************************************ 00:09:06.114 START TEST rpc_integrity 00:09:06.114 ************************************ 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:06.114 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.114 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:06.114 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:06.114 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:06.114 04:30:53 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.114 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:06.114 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.114 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:06.114 { 00:09:06.114 "name": "Malloc0", 00:09:06.114 "aliases": [ 00:09:06.114 "dc19befa-6f2b-4407-bf65-0500004f4474" 00:09:06.114 ], 00:09:06.114 "product_name": "Malloc disk", 00:09:06.114 "block_size": 512, 00:09:06.114 "num_blocks": 16384, 00:09:06.114 "uuid": "dc19befa-6f2b-4407-bf65-0500004f4474", 00:09:06.114 "assigned_rate_limits": { 00:09:06.114 "rw_ios_per_sec": 0, 00:09:06.114 "rw_mbytes_per_sec": 0, 00:09:06.114 "r_mbytes_per_sec": 0, 00:09:06.114 "w_mbytes_per_sec": 0 00:09:06.114 }, 00:09:06.114 "claimed": false, 00:09:06.114 "zoned": false, 00:09:06.114 "supported_io_types": { 00:09:06.114 "read": true, 00:09:06.114 "write": true, 00:09:06.114 "unmap": true, 00:09:06.114 "flush": true, 00:09:06.114 "reset": true, 00:09:06.114 "nvme_admin": false, 00:09:06.114 "nvme_io": false, 00:09:06.114 "nvme_io_md": false, 00:09:06.114 "write_zeroes": true, 00:09:06.114 "zcopy": true, 00:09:06.114 "get_zone_info": false, 00:09:06.114 "zone_management": false, 00:09:06.114 "zone_append": false, 00:09:06.114 "compare": false, 00:09:06.114 "compare_and_write": false, 00:09:06.114 "abort": true, 00:09:06.114 "seek_hole": false, 
00:09:06.114 "seek_data": false, 00:09:06.114 "copy": true, 00:09:06.114 "nvme_iov_md": false 00:09:06.114 }, 00:09:06.114 "memory_domains": [ 00:09:06.114 { 00:09:06.114 "dma_device_id": "system", 00:09:06.114 "dma_device_type": 1 00:09:06.114 }, 00:09:06.114 { 00:09:06.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.114 "dma_device_type": 2 00:09:06.114 } 00:09:06.114 ], 00:09:06.114 "driver_specific": {} 00:09:06.114 } 00:09:06.114 ]' 00:09:06.114 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:06.114 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:06.114 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.114 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.115 [2024-11-27 04:30:53.715757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:06.115 [2024-11-27 04:30:53.716031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.115 [2024-11-27 04:30:53.716077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:06.115 [2024-11-27 04:30:53.716104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.115 [2024-11-27 04:30:53.719422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.115 [2024-11-27 04:30:53.719605] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:06.115 Passthru0 00:09:06.115 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.115 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:06.115 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.115 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:09:06.374 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.374 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:06.374 { 00:09:06.374 "name": "Malloc0", 00:09:06.374 "aliases": [ 00:09:06.374 "dc19befa-6f2b-4407-bf65-0500004f4474" 00:09:06.374 ], 00:09:06.374 "product_name": "Malloc disk", 00:09:06.374 "block_size": 512, 00:09:06.374 "num_blocks": 16384, 00:09:06.374 "uuid": "dc19befa-6f2b-4407-bf65-0500004f4474", 00:09:06.374 "assigned_rate_limits": { 00:09:06.374 "rw_ios_per_sec": 0, 00:09:06.374 "rw_mbytes_per_sec": 0, 00:09:06.374 "r_mbytes_per_sec": 0, 00:09:06.374 "w_mbytes_per_sec": 0 00:09:06.374 }, 00:09:06.374 "claimed": true, 00:09:06.374 "claim_type": "exclusive_write", 00:09:06.374 "zoned": false, 00:09:06.374 "supported_io_types": { 00:09:06.374 "read": true, 00:09:06.374 "write": true, 00:09:06.374 "unmap": true, 00:09:06.374 "flush": true, 00:09:06.374 "reset": true, 00:09:06.374 "nvme_admin": false, 00:09:06.374 "nvme_io": false, 00:09:06.374 "nvme_io_md": false, 00:09:06.374 "write_zeroes": true, 00:09:06.374 "zcopy": true, 00:09:06.374 "get_zone_info": false, 00:09:06.374 "zone_management": false, 00:09:06.374 "zone_append": false, 00:09:06.374 "compare": false, 00:09:06.374 "compare_and_write": false, 00:09:06.374 "abort": true, 00:09:06.374 "seek_hole": false, 00:09:06.374 "seek_data": false, 00:09:06.374 "copy": true, 00:09:06.374 "nvme_iov_md": false 00:09:06.374 }, 00:09:06.374 "memory_domains": [ 00:09:06.374 { 00:09:06.374 "dma_device_id": "system", 00:09:06.374 "dma_device_type": 1 00:09:06.374 }, 00:09:06.374 { 00:09:06.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.374 "dma_device_type": 2 00:09:06.374 } 00:09:06.374 ], 00:09:06.374 "driver_specific": {} 00:09:06.374 }, 00:09:06.374 { 00:09:06.374 "name": "Passthru0", 00:09:06.374 "aliases": [ 00:09:06.374 "5b0519b7-f32c-5609-9942-b87136060e4e" 00:09:06.374 ], 00:09:06.374 "product_name": "passthru", 00:09:06.374 
"block_size": 512, 00:09:06.374 "num_blocks": 16384, 00:09:06.374 "uuid": "5b0519b7-f32c-5609-9942-b87136060e4e", 00:09:06.374 "assigned_rate_limits": { 00:09:06.374 "rw_ios_per_sec": 0, 00:09:06.374 "rw_mbytes_per_sec": 0, 00:09:06.374 "r_mbytes_per_sec": 0, 00:09:06.374 "w_mbytes_per_sec": 0 00:09:06.374 }, 00:09:06.374 "claimed": false, 00:09:06.374 "zoned": false, 00:09:06.374 "supported_io_types": { 00:09:06.374 "read": true, 00:09:06.374 "write": true, 00:09:06.374 "unmap": true, 00:09:06.374 "flush": true, 00:09:06.374 "reset": true, 00:09:06.374 "nvme_admin": false, 00:09:06.374 "nvme_io": false, 00:09:06.374 "nvme_io_md": false, 00:09:06.374 "write_zeroes": true, 00:09:06.374 "zcopy": true, 00:09:06.374 "get_zone_info": false, 00:09:06.374 "zone_management": false, 00:09:06.374 "zone_append": false, 00:09:06.374 "compare": false, 00:09:06.374 "compare_and_write": false, 00:09:06.374 "abort": true, 00:09:06.374 "seek_hole": false, 00:09:06.374 "seek_data": false, 00:09:06.374 "copy": true, 00:09:06.374 "nvme_iov_md": false 00:09:06.374 }, 00:09:06.374 "memory_domains": [ 00:09:06.374 { 00:09:06.374 "dma_device_id": "system", 00:09:06.374 "dma_device_type": 1 00:09:06.374 }, 00:09:06.374 { 00:09:06.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.374 "dma_device_type": 2 00:09:06.374 } 00:09:06.374 ], 00:09:06.374 "driver_specific": { 00:09:06.374 "passthru": { 00:09:06.374 "name": "Passthru0", 00:09:06.374 "base_bdev_name": "Malloc0" 00:09:06.374 } 00:09:06.374 } 00:09:06.374 } 00:09:06.374 ]' 00:09:06.374 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:06.375 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:06.375 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:06.375 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.375 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.375 04:30:53 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.375 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:06.375 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.375 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.375 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.375 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:06.375 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.375 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.375 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.375 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:06.375 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:06.375 ************************************ 00:09:06.375 END TEST rpc_integrity 00:09:06.375 ************************************ 00:09:06.375 04:30:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:06.375 00:09:06.375 real 0m0.352s 00:09:06.375 user 0m0.207s 00:09:06.375 sys 0m0.046s 00:09:06.375 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.375 04:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.375 04:30:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:06.375 04:30:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.375 04:30:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.375 04:30:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.375 ************************************ 00:09:06.375 START TEST rpc_plugins 00:09:06.375 ************************************ 00:09:06.375 04:30:53 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:06.375 04:30:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:09:06.375 04:30:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.375 04:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:06.375 04:30:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.375 04:30:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:06.375 04:30:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:06.375 04:30:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.375 04:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:06.375 04:30:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.375 04:30:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:06.375 { 00:09:06.375 "name": "Malloc1", 00:09:06.375 "aliases": [ 00:09:06.375 "e80450cb-ea3f-4809-91c1-add9b6df44de" 00:09:06.375 ], 00:09:06.375 "product_name": "Malloc disk", 00:09:06.375 "block_size": 4096, 00:09:06.375 "num_blocks": 256, 00:09:06.375 "uuid": "e80450cb-ea3f-4809-91c1-add9b6df44de", 00:09:06.375 "assigned_rate_limits": { 00:09:06.375 "rw_ios_per_sec": 0, 00:09:06.375 "rw_mbytes_per_sec": 0, 00:09:06.375 "r_mbytes_per_sec": 0, 00:09:06.375 "w_mbytes_per_sec": 0 00:09:06.375 }, 00:09:06.375 "claimed": false, 00:09:06.375 "zoned": false, 00:09:06.375 "supported_io_types": { 00:09:06.375 "read": true, 00:09:06.375 "write": true, 00:09:06.375 "unmap": true, 00:09:06.375 "flush": true, 00:09:06.375 "reset": true, 00:09:06.375 "nvme_admin": false, 00:09:06.375 "nvme_io": false, 00:09:06.375 "nvme_io_md": false, 00:09:06.375 "write_zeroes": true, 00:09:06.375 "zcopy": true, 00:09:06.375 "get_zone_info": false, 00:09:06.375 "zone_management": false, 00:09:06.375 "zone_append": false, 00:09:06.375 "compare": false, 00:09:06.375 "compare_and_write": false, 00:09:06.375 "abort": true, 00:09:06.375 "seek_hole": false, 00:09:06.375 "seek_data": false, 00:09:06.375 "copy": 
true, 00:09:06.375 "nvme_iov_md": false 00:09:06.375 }, 00:09:06.375 "memory_domains": [ 00:09:06.375 { 00:09:06.375 "dma_device_id": "system", 00:09:06.375 "dma_device_type": 1 00:09:06.375 }, 00:09:06.375 { 00:09:06.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.375 "dma_device_type": 2 00:09:06.375 } 00:09:06.375 ], 00:09:06.375 "driver_specific": {} 00:09:06.375 } 00:09:06.375 ]' 00:09:06.634 04:30:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:06.634 04:30:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:06.634 04:30:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:06.634 04:30:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.634 04:30:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:06.634 04:30:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.634 04:30:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:06.634 04:30:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.634 04:30:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:06.634 04:30:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.634 04:30:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:06.634 04:30:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:06.634 ************************************ 00:09:06.634 END TEST rpc_plugins 00:09:06.634 ************************************ 00:09:06.634 04:30:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:06.634 00:09:06.634 real 0m0.168s 00:09:06.634 user 0m0.104s 00:09:06.634 sys 0m0.021s 00:09:06.634 04:30:54 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.634 04:30:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:06.634 04:30:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:06.634 04:30:54 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.634 04:30:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.634 04:30:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.634 ************************************ 00:09:06.634 START TEST rpc_trace_cmd_test 00:09:06.634 ************************************ 00:09:06.634 04:30:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:09:06.634 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:06.634 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:06.634 04:30:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.634 04:30:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.634 04:30:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.634 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:06.634 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56969", 00:09:06.634 "tpoint_group_mask": "0x8", 00:09:06.634 "iscsi_conn": { 00:09:06.634 "mask": "0x2", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "scsi": { 00:09:06.634 "mask": "0x4", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "bdev": { 00:09:06.634 "mask": "0x8", 00:09:06.634 "tpoint_mask": "0xffffffffffffffff" 00:09:06.634 }, 00:09:06.634 "nvmf_rdma": { 00:09:06.634 "mask": "0x10", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "nvmf_tcp": { 00:09:06.634 "mask": "0x20", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "ftl": { 00:09:06.634 "mask": "0x40", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "blobfs": { 00:09:06.634 "mask": "0x80", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "dsa": { 00:09:06.634 "mask": "0x200", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "thread": { 00:09:06.634 "mask": "0x400", 00:09:06.634 
"tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "nvme_pcie": { 00:09:06.634 "mask": "0x800", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "iaa": { 00:09:06.634 "mask": "0x1000", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "nvme_tcp": { 00:09:06.634 "mask": "0x2000", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "bdev_nvme": { 00:09:06.634 "mask": "0x4000", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "sock": { 00:09:06.634 "mask": "0x8000", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "blob": { 00:09:06.634 "mask": "0x10000", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "bdev_raid": { 00:09:06.634 "mask": "0x20000", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 }, 00:09:06.634 "scheduler": { 00:09:06.634 "mask": "0x40000", 00:09:06.634 "tpoint_mask": "0x0" 00:09:06.634 } 00:09:06.634 }' 00:09:06.634 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:06.634 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:06.893 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:06.893 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:06.893 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:06.893 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:06.893 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:06.893 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:06.893 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:06.893 ************************************ 00:09:06.893 END TEST rpc_trace_cmd_test 00:09:06.893 ************************************ 00:09:06.893 04:30:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:06.893 00:09:06.893 real 0m0.277s 00:09:06.893 user 
0m0.231s 00:09:06.893 sys 0m0.036s 00:09:06.893 04:30:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.893 04:30:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.893 04:30:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:06.893 04:30:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:06.893 04:30:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:06.893 04:30:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.893 04:30:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.893 04:30:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.893 ************************************ 00:09:06.893 START TEST rpc_daemon_integrity 00:09:06.893 ************************************ 00:09:06.893 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:06.893 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:06.893 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.893 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:07.152 { 00:09:07.152 "name": "Malloc2", 00:09:07.152 "aliases": [ 00:09:07.152 "b580f79c-ff2d-4a46-a513-faf4e20c3c8f" 00:09:07.152 ], 00:09:07.152 "product_name": "Malloc disk", 00:09:07.152 "block_size": 512, 00:09:07.152 "num_blocks": 16384, 00:09:07.152 "uuid": "b580f79c-ff2d-4a46-a513-faf4e20c3c8f", 00:09:07.152 "assigned_rate_limits": { 00:09:07.152 "rw_ios_per_sec": 0, 00:09:07.152 "rw_mbytes_per_sec": 0, 00:09:07.152 "r_mbytes_per_sec": 0, 00:09:07.152 "w_mbytes_per_sec": 0 00:09:07.152 }, 00:09:07.152 "claimed": false, 00:09:07.152 "zoned": false, 00:09:07.152 "supported_io_types": { 00:09:07.152 "read": true, 00:09:07.152 "write": true, 00:09:07.152 "unmap": true, 00:09:07.152 "flush": true, 00:09:07.152 "reset": true, 00:09:07.152 "nvme_admin": false, 00:09:07.152 "nvme_io": false, 00:09:07.152 "nvme_io_md": false, 00:09:07.152 "write_zeroes": true, 00:09:07.152 "zcopy": true, 00:09:07.152 "get_zone_info": false, 00:09:07.152 "zone_management": false, 00:09:07.152 "zone_append": false, 00:09:07.152 "compare": false, 00:09:07.152 "compare_and_write": false, 00:09:07.152 "abort": true, 00:09:07.152 "seek_hole": false, 00:09:07.152 "seek_data": false, 00:09:07.152 "copy": true, 00:09:07.152 "nvme_iov_md": false 00:09:07.152 }, 00:09:07.152 "memory_domains": [ 00:09:07.152 { 00:09:07.152 "dma_device_id": "system", 00:09:07.152 "dma_device_type": 1 00:09:07.152 }, 00:09:07.152 { 00:09:07.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.152 "dma_device_type": 2 00:09:07.152 } 
00:09:07.152 ], 00:09:07.152 "driver_specific": {} 00:09:07.152 } 00:09:07.152 ]' 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.152 [2024-11-27 04:30:54.675702] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:07.152 [2024-11-27 04:30:54.675831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.152 [2024-11-27 04:30:54.675872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:07.152 [2024-11-27 04:30:54.675897] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.152 [2024-11-27 04:30:54.679366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.152 [2024-11-27 04:30:54.679427] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:07.152 Passthru0 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.152 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:07.152 { 00:09:07.152 "name": "Malloc2", 00:09:07.152 "aliases": [ 00:09:07.152 "b580f79c-ff2d-4a46-a513-faf4e20c3c8f" 
00:09:07.152 ], 00:09:07.152 "product_name": "Malloc disk", 00:09:07.152 "block_size": 512, 00:09:07.152 "num_blocks": 16384, 00:09:07.152 "uuid": "b580f79c-ff2d-4a46-a513-faf4e20c3c8f", 00:09:07.152 "assigned_rate_limits": { 00:09:07.152 "rw_ios_per_sec": 0, 00:09:07.152 "rw_mbytes_per_sec": 0, 00:09:07.152 "r_mbytes_per_sec": 0, 00:09:07.152 "w_mbytes_per_sec": 0 00:09:07.152 }, 00:09:07.152 "claimed": true, 00:09:07.152 "claim_type": "exclusive_write", 00:09:07.152 "zoned": false, 00:09:07.152 "supported_io_types": { 00:09:07.152 "read": true, 00:09:07.152 "write": true, 00:09:07.152 "unmap": true, 00:09:07.152 "flush": true, 00:09:07.152 "reset": true, 00:09:07.152 "nvme_admin": false, 00:09:07.152 "nvme_io": false, 00:09:07.152 "nvme_io_md": false, 00:09:07.152 "write_zeroes": true, 00:09:07.152 "zcopy": true, 00:09:07.152 "get_zone_info": false, 00:09:07.152 "zone_management": false, 00:09:07.152 "zone_append": false, 00:09:07.152 "compare": false, 00:09:07.152 "compare_and_write": false, 00:09:07.152 "abort": true, 00:09:07.152 "seek_hole": false, 00:09:07.152 "seek_data": false, 00:09:07.152 "copy": true, 00:09:07.152 "nvme_iov_md": false 00:09:07.152 }, 00:09:07.152 "memory_domains": [ 00:09:07.152 { 00:09:07.152 "dma_device_id": "system", 00:09:07.152 "dma_device_type": 1 00:09:07.152 }, 00:09:07.152 { 00:09:07.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.152 "dma_device_type": 2 00:09:07.152 } 00:09:07.152 ], 00:09:07.152 "driver_specific": {} 00:09:07.152 }, 00:09:07.152 { 00:09:07.152 "name": "Passthru0", 00:09:07.152 "aliases": [ 00:09:07.152 "ddd2ef96-c529-5bb2-99c6-21b881d08260" 00:09:07.152 ], 00:09:07.152 "product_name": "passthru", 00:09:07.153 "block_size": 512, 00:09:07.153 "num_blocks": 16384, 00:09:07.153 "uuid": "ddd2ef96-c529-5bb2-99c6-21b881d08260", 00:09:07.153 "assigned_rate_limits": { 00:09:07.153 "rw_ios_per_sec": 0, 00:09:07.153 "rw_mbytes_per_sec": 0, 00:09:07.153 "r_mbytes_per_sec": 0, 00:09:07.153 "w_mbytes_per_sec": 0 
00:09:07.153 }, 00:09:07.153 "claimed": false, 00:09:07.153 "zoned": false, 00:09:07.153 "supported_io_types": { 00:09:07.153 "read": true, 00:09:07.153 "write": true, 00:09:07.153 "unmap": true, 00:09:07.153 "flush": true, 00:09:07.153 "reset": true, 00:09:07.153 "nvme_admin": false, 00:09:07.153 "nvme_io": false, 00:09:07.153 "nvme_io_md": false, 00:09:07.153 "write_zeroes": true, 00:09:07.153 "zcopy": true, 00:09:07.153 "get_zone_info": false, 00:09:07.153 "zone_management": false, 00:09:07.153 "zone_append": false, 00:09:07.153 "compare": false, 00:09:07.153 "compare_and_write": false, 00:09:07.153 "abort": true, 00:09:07.153 "seek_hole": false, 00:09:07.153 "seek_data": false, 00:09:07.153 "copy": true, 00:09:07.153 "nvme_iov_md": false 00:09:07.153 }, 00:09:07.153 "memory_domains": [ 00:09:07.153 { 00:09:07.153 "dma_device_id": "system", 00:09:07.153 "dma_device_type": 1 00:09:07.153 }, 00:09:07.153 { 00:09:07.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.153 "dma_device_type": 2 00:09:07.153 } 00:09:07.153 ], 00:09:07.153 "driver_specific": { 00:09:07.153 "passthru": { 00:09:07.153 "name": "Passthru0", 00:09:07.153 "base_bdev_name": "Malloc2" 00:09:07.153 } 00:09:07.153 } 00:09:07.153 } 00:09:07.153 ]' 00:09:07.153 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:07.153 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:07.153 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:07.153 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.153 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.153 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.153 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:07.153 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:07.153 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.411 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.411 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:07.411 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.411 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.411 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.411 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:07.411 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:07.411 ************************************ 00:09:07.411 END TEST rpc_daemon_integrity 00:09:07.411 ************************************ 00:09:07.411 04:30:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:07.411 00:09:07.411 real 0m0.358s 00:09:07.411 user 0m0.216s 00:09:07.411 sys 0m0.040s 00:09:07.411 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.411 04:30:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.411 04:30:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:07.411 04:30:54 rpc -- rpc/rpc.sh@84 -- # killprocess 56969 00:09:07.411 04:30:54 rpc -- common/autotest_common.sh@954 -- # '[' -z 56969 ']' 00:09:07.411 04:30:54 rpc -- common/autotest_common.sh@958 -- # kill -0 56969 00:09:07.411 04:30:54 rpc -- common/autotest_common.sh@959 -- # uname 00:09:07.411 04:30:54 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.411 04:30:54 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56969 00:09:07.411 killing process with pid 56969 00:09:07.411 04:30:54 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.411 04:30:54 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:09:07.411 04:30:54 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56969' 00:09:07.411 04:30:54 rpc -- common/autotest_common.sh@973 -- # kill 56969 00:09:07.411 04:30:54 rpc -- common/autotest_common.sh@978 -- # wait 56969 00:09:09.945 00:09:09.945 real 0m5.451s 00:09:09.945 user 0m6.089s 00:09:09.945 sys 0m0.934s 00:09:09.945 04:30:57 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.945 ************************************ 00:09:09.945 END TEST rpc 00:09:09.945 ************************************ 00:09:09.945 04:30:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.945 04:30:57 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:09.945 04:30:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.945 04:30:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.945 04:30:57 -- common/autotest_common.sh@10 -- # set +x 00:09:09.945 ************************************ 00:09:09.945 START TEST skip_rpc 00:09:09.945 ************************************ 00:09:09.945 04:30:57 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:09.945 * Looking for test storage... 
00:09:09.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:09.945 04:30:57 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.945 04:30:57 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.945 04:30:57 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.204 04:30:57 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.204 04:30:57 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:10.204 04:30:57 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.204 04:30:57 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.204 --rc genhtml_branch_coverage=1 00:09:10.204 --rc genhtml_function_coverage=1 00:09:10.204 --rc genhtml_legend=1 00:09:10.204 --rc geninfo_all_blocks=1 00:09:10.204 --rc geninfo_unexecuted_blocks=1 00:09:10.204 00:09:10.204 ' 00:09:10.204 04:30:57 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.204 --rc genhtml_branch_coverage=1 00:09:10.204 --rc genhtml_function_coverage=1 00:09:10.204 --rc genhtml_legend=1 00:09:10.204 --rc geninfo_all_blocks=1 00:09:10.204 --rc geninfo_unexecuted_blocks=1 00:09:10.204 00:09:10.204 ' 00:09:10.204 04:30:57 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:09:10.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.204 --rc genhtml_branch_coverage=1 00:09:10.204 --rc genhtml_function_coverage=1 00:09:10.204 --rc genhtml_legend=1 00:09:10.204 --rc geninfo_all_blocks=1 00:09:10.204 --rc geninfo_unexecuted_blocks=1 00:09:10.204 00:09:10.204 ' 00:09:10.204 04:30:57 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:10.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.204 --rc genhtml_branch_coverage=1 00:09:10.204 --rc genhtml_function_coverage=1 00:09:10.204 --rc genhtml_legend=1 00:09:10.204 --rc geninfo_all_blocks=1 00:09:10.204 --rc geninfo_unexecuted_blocks=1 00:09:10.204 00:09:10.204 ' 00:09:10.205 04:30:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:10.205 04:30:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:10.205 04:30:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:10.205 04:30:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.205 04:30:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.205 04:30:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.205 ************************************ 00:09:10.205 START TEST skip_rpc 00:09:10.205 ************************************ 00:09:10.205 04:30:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:10.205 04:30:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57204 00:09:10.205 04:30:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:10.205 04:30:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:10.205 04:30:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:10.205 [2024-11-27 04:30:57.794450] Starting SPDK v25.01-pre 
git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:09:10.205 [2024-11-27 04:30:57.794948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57204 ] 00:09:10.463 [2024-11-27 04:30:57.987409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.722 [2024-11-27 04:30:58.162405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.988 04:31:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:15.988 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:15.988 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:15.988 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:15.988 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.988 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:15.988 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.988 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57204 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57204 ']' 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57204 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57204 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57204' 00:09:15.989 killing process with pid 57204 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57204 00:09:15.989 04:31:02 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57204 00:09:17.364 ************************************ 00:09:17.364 END TEST skip_rpc 00:09:17.364 ************************************ 00:09:17.364 00:09:17.364 real 0m7.259s 00:09:17.364 user 0m6.681s 00:09:17.364 sys 0m0.459s 00:09:17.364 04:31:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.364 04:31:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.364 04:31:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:17.364 04:31:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.364 04:31:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.364 04:31:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.364 
************************************ 00:09:17.364 START TEST skip_rpc_with_json 00:09:17.364 ************************************ 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:17.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57308 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57308 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57308 ']' 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.364 04:31:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:17.622 [2024-11-27 04:31:05.087734] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:09:17.622 [2024-11-27 04:31:05.087938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57308 ] 00:09:17.880 [2024-11-27 04:31:05.270372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.880 [2024-11-27 04:31:05.426406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.813 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.813 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:18.813 04:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:18.813 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.813 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:18.813 [2024-11-27 04:31:06.314865] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:18.813 request: 00:09:18.813 { 00:09:18.813 "trtype": "tcp", 00:09:18.813 "method": "nvmf_get_transports", 00:09:18.814 "req_id": 1 00:09:18.814 } 00:09:18.814 Got JSON-RPC error response 00:09:18.814 response: 00:09:18.814 { 00:09:18.814 "code": -19, 00:09:18.814 "message": "No such device" 00:09:18.814 } 00:09:18.814 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:18.814 04:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:18.814 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.814 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:18.814 [2024-11-27 04:31:06.327005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:09:18.814 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.814 04:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:18.814 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.814 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:19.072 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.072 04:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:19.072 { 00:09:19.072 "subsystems": [ 00:09:19.072 { 00:09:19.072 "subsystem": "fsdev", 00:09:19.072 "config": [ 00:09:19.072 { 00:09:19.072 "method": "fsdev_set_opts", 00:09:19.072 "params": { 00:09:19.072 "fsdev_io_pool_size": 65535, 00:09:19.072 "fsdev_io_cache_size": 256 00:09:19.072 } 00:09:19.072 } 00:09:19.072 ] 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "subsystem": "keyring", 00:09:19.072 "config": [] 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "subsystem": "iobuf", 00:09:19.072 "config": [ 00:09:19.072 { 00:09:19.072 "method": "iobuf_set_options", 00:09:19.072 "params": { 00:09:19.072 "small_pool_count": 8192, 00:09:19.072 "large_pool_count": 1024, 00:09:19.072 "small_bufsize": 8192, 00:09:19.072 "large_bufsize": 135168, 00:09:19.072 "enable_numa": false 00:09:19.072 } 00:09:19.072 } 00:09:19.072 ] 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "subsystem": "sock", 00:09:19.072 "config": [ 00:09:19.072 { 00:09:19.072 "method": "sock_set_default_impl", 00:09:19.072 "params": { 00:09:19.072 "impl_name": "posix" 00:09:19.072 } 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "method": "sock_impl_set_options", 00:09:19.072 "params": { 00:09:19.072 "impl_name": "ssl", 00:09:19.072 "recv_buf_size": 4096, 00:09:19.072 "send_buf_size": 4096, 00:09:19.072 "enable_recv_pipe": true, 00:09:19.072 "enable_quickack": false, 00:09:19.072 
"enable_placement_id": 0, 00:09:19.072 "enable_zerocopy_send_server": true, 00:09:19.072 "enable_zerocopy_send_client": false, 00:09:19.072 "zerocopy_threshold": 0, 00:09:19.072 "tls_version": 0, 00:09:19.072 "enable_ktls": false 00:09:19.072 } 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "method": "sock_impl_set_options", 00:09:19.072 "params": { 00:09:19.072 "impl_name": "posix", 00:09:19.072 "recv_buf_size": 2097152, 00:09:19.072 "send_buf_size": 2097152, 00:09:19.072 "enable_recv_pipe": true, 00:09:19.072 "enable_quickack": false, 00:09:19.072 "enable_placement_id": 0, 00:09:19.072 "enable_zerocopy_send_server": true, 00:09:19.072 "enable_zerocopy_send_client": false, 00:09:19.072 "zerocopy_threshold": 0, 00:09:19.072 "tls_version": 0, 00:09:19.072 "enable_ktls": false 00:09:19.072 } 00:09:19.072 } 00:09:19.072 ] 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "subsystem": "vmd", 00:09:19.072 "config": [] 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "subsystem": "accel", 00:09:19.072 "config": [ 00:09:19.072 { 00:09:19.072 "method": "accel_set_options", 00:09:19.072 "params": { 00:09:19.072 "small_cache_size": 128, 00:09:19.072 "large_cache_size": 16, 00:09:19.072 "task_count": 2048, 00:09:19.072 "sequence_count": 2048, 00:09:19.072 "buf_count": 2048 00:09:19.072 } 00:09:19.072 } 00:09:19.072 ] 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "subsystem": "bdev", 00:09:19.072 "config": [ 00:09:19.072 { 00:09:19.073 "method": "bdev_set_options", 00:09:19.073 "params": { 00:09:19.073 "bdev_io_pool_size": 65535, 00:09:19.073 "bdev_io_cache_size": 256, 00:09:19.073 "bdev_auto_examine": true, 00:09:19.073 "iobuf_small_cache_size": 128, 00:09:19.073 "iobuf_large_cache_size": 16 00:09:19.073 } 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "method": "bdev_raid_set_options", 00:09:19.073 "params": { 00:09:19.073 "process_window_size_kb": 1024, 00:09:19.073 "process_max_bandwidth_mb_sec": 0 00:09:19.073 } 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "method": "bdev_iscsi_set_options", 
00:09:19.073 "params": { 00:09:19.073 "timeout_sec": 30 00:09:19.073 } 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "method": "bdev_nvme_set_options", 00:09:19.073 "params": { 00:09:19.073 "action_on_timeout": "none", 00:09:19.073 "timeout_us": 0, 00:09:19.073 "timeout_admin_us": 0, 00:09:19.073 "keep_alive_timeout_ms": 10000, 00:09:19.073 "arbitration_burst": 0, 00:09:19.073 "low_priority_weight": 0, 00:09:19.073 "medium_priority_weight": 0, 00:09:19.073 "high_priority_weight": 0, 00:09:19.073 "nvme_adminq_poll_period_us": 10000, 00:09:19.073 "nvme_ioq_poll_period_us": 0, 00:09:19.073 "io_queue_requests": 0, 00:09:19.073 "delay_cmd_submit": true, 00:09:19.073 "transport_retry_count": 4, 00:09:19.073 "bdev_retry_count": 3, 00:09:19.073 "transport_ack_timeout": 0, 00:09:19.073 "ctrlr_loss_timeout_sec": 0, 00:09:19.073 "reconnect_delay_sec": 0, 00:09:19.073 "fast_io_fail_timeout_sec": 0, 00:09:19.073 "disable_auto_failback": false, 00:09:19.073 "generate_uuids": false, 00:09:19.073 "transport_tos": 0, 00:09:19.073 "nvme_error_stat": false, 00:09:19.073 "rdma_srq_size": 0, 00:09:19.073 "io_path_stat": false, 00:09:19.073 "allow_accel_sequence": false, 00:09:19.073 "rdma_max_cq_size": 0, 00:09:19.073 "rdma_cm_event_timeout_ms": 0, 00:09:19.073 "dhchap_digests": [ 00:09:19.073 "sha256", 00:09:19.073 "sha384", 00:09:19.073 "sha512" 00:09:19.073 ], 00:09:19.073 "dhchap_dhgroups": [ 00:09:19.073 "null", 00:09:19.073 "ffdhe2048", 00:09:19.073 "ffdhe3072", 00:09:19.073 "ffdhe4096", 00:09:19.073 "ffdhe6144", 00:09:19.073 "ffdhe8192" 00:09:19.073 ] 00:09:19.073 } 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "method": "bdev_nvme_set_hotplug", 00:09:19.073 "params": { 00:09:19.073 "period_us": 100000, 00:09:19.073 "enable": false 00:09:19.073 } 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "method": "bdev_wait_for_examine" 00:09:19.073 } 00:09:19.073 ] 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "subsystem": "scsi", 00:09:19.073 "config": null 00:09:19.073 }, 00:09:19.073 { 
00:09:19.073 "subsystem": "scheduler", 00:09:19.073 "config": [ 00:09:19.073 { 00:09:19.073 "method": "framework_set_scheduler", 00:09:19.073 "params": { 00:09:19.073 "name": "static" 00:09:19.073 } 00:09:19.073 } 00:09:19.073 ] 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "subsystem": "vhost_scsi", 00:09:19.073 "config": [] 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "subsystem": "vhost_blk", 00:09:19.073 "config": [] 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "subsystem": "ublk", 00:09:19.073 "config": [] 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "subsystem": "nbd", 00:09:19.073 "config": [] 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "subsystem": "nvmf", 00:09:19.073 "config": [ 00:09:19.073 { 00:09:19.073 "method": "nvmf_set_config", 00:09:19.073 "params": { 00:09:19.073 "discovery_filter": "match_any", 00:09:19.073 "admin_cmd_passthru": { 00:09:19.073 "identify_ctrlr": false 00:09:19.073 }, 00:09:19.073 "dhchap_digests": [ 00:09:19.073 "sha256", 00:09:19.073 "sha384", 00:09:19.073 "sha512" 00:09:19.073 ], 00:09:19.073 "dhchap_dhgroups": [ 00:09:19.073 "null", 00:09:19.073 "ffdhe2048", 00:09:19.073 "ffdhe3072", 00:09:19.073 "ffdhe4096", 00:09:19.073 "ffdhe6144", 00:09:19.073 "ffdhe8192" 00:09:19.073 ] 00:09:19.073 } 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "method": "nvmf_set_max_subsystems", 00:09:19.073 "params": { 00:09:19.073 "max_subsystems": 1024 00:09:19.073 } 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "method": "nvmf_set_crdt", 00:09:19.073 "params": { 00:09:19.073 "crdt1": 0, 00:09:19.073 "crdt2": 0, 00:09:19.073 "crdt3": 0 00:09:19.073 } 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "method": "nvmf_create_transport", 00:09:19.073 "params": { 00:09:19.073 "trtype": "TCP", 00:09:19.073 "max_queue_depth": 128, 00:09:19.073 "max_io_qpairs_per_ctrlr": 127, 00:09:19.073 "in_capsule_data_size": 4096, 00:09:19.073 "max_io_size": 131072, 00:09:19.073 "io_unit_size": 131072, 00:09:19.073 "max_aq_depth": 128, 00:09:19.073 "num_shared_buffers": 511, 
00:09:19.073 "buf_cache_size": 4294967295, 00:09:19.073 "dif_insert_or_strip": false, 00:09:19.073 "zcopy": false, 00:09:19.073 "c2h_success": true, 00:09:19.073 "sock_priority": 0, 00:09:19.073 "abort_timeout_sec": 1, 00:09:19.073 "ack_timeout": 0, 00:09:19.073 "data_wr_pool_size": 0 00:09:19.073 } 00:09:19.073 } 00:09:19.073 ] 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "subsystem": "iscsi", 00:09:19.073 "config": [ 00:09:19.073 { 00:09:19.073 "method": "iscsi_set_options", 00:09:19.073 "params": { 00:09:19.073 "node_base": "iqn.2016-06.io.spdk", 00:09:19.073 "max_sessions": 128, 00:09:19.073 "max_connections_per_session": 2, 00:09:19.073 "max_queue_depth": 64, 00:09:19.073 "default_time2wait": 2, 00:09:19.073 "default_time2retain": 20, 00:09:19.073 "first_burst_length": 8192, 00:09:19.073 "immediate_data": true, 00:09:19.073 "allow_duplicated_isid": false, 00:09:19.073 "error_recovery_level": 0, 00:09:19.073 "nop_timeout": 60, 00:09:19.073 "nop_in_interval": 30, 00:09:19.073 "disable_chap": false, 00:09:19.073 "require_chap": false, 00:09:19.073 "mutual_chap": false, 00:09:19.073 "chap_group": 0, 00:09:19.073 "max_large_datain_per_connection": 64, 00:09:19.073 "max_r2t_per_connection": 4, 00:09:19.073 "pdu_pool_size": 36864, 00:09:19.073 "immediate_data_pool_size": 16384, 00:09:19.073 "data_out_pool_size": 2048 00:09:19.073 } 00:09:19.073 } 00:09:19.073 ] 00:09:19.073 } 00:09:19.073 ] 00:09:19.073 } 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57308 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57308 ']' 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57308 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57308 00:09:19.073 killing process with pid 57308 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57308' 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57308 00:09:19.073 04:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57308 00:09:21.605 04:31:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57364 00:09:21.605 04:31:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:21.605 04:31:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57364 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57364 ']' 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57364 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57364 00:09:26.872 killing process with pid 57364 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57364' 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57364 00:09:26.872 04:31:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57364 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:28.773 ************************************ 00:09:28.773 END TEST skip_rpc_with_json 00:09:28.773 ************************************ 00:09:28.773 00:09:28.773 real 0m11.098s 00:09:28.773 user 0m10.438s 00:09:28.773 sys 0m1.027s 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:28.773 04:31:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:28.773 04:31:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.773 04:31:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.773 04:31:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.773 ************************************ 00:09:28.773 START TEST skip_rpc_with_delay 00:09:28.773 ************************************ 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:28.773 
04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.773 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.774 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:28.774 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:28.774 [2024-11-27 04:31:16.269846] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
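The xtrace above steps through the `valid_exec_arg` helper from autotest_common.sh, which uses `type -t` and `type -P` to confirm the first word of a command is actually runnable before executing it. A minimal sketch of that pattern (this is an illustrative reimplementation, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of a "valid_exec_arg"-style guard: accept the argument only if
# bash can resolve it to something executable -- a builtin, function,
# alias, or an on-disk file with the execute bit set.
valid_exec_arg() {
    local arg=$1
    case "$(type -t "$arg")" in
        builtin|function|alias)
            return 0 ;;             # shell-resolvable: fine to run
        file)
            arg=$(type -P "$arg")   # resolve the on-disk path
            [[ -x $arg ]] ;;        # require the execute bit
        *)
            return 1 ;;             # unresolvable word: refuse to exec
    esac
}

valid_exec_arg echo && echo "echo: ok"
valid_exec_arg /bin/sh && echo "/bin/sh: ok"
valid_exec_arg no_such_cmd_xyz_12345 || echo "no_such_cmd_xyz_12345: rejected"
```

In the test above this guard wraps the `spdk_tgt --no-rpc-server --wait-for-rpc` invocation, so the `NOT` assertion is checking the target's exit status rather than a shell lookup failure.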
00:09:28.774 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:28.774 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.774 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.774 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.774 00:09:28.774 real 0m0.210s 00:09:28.774 user 0m0.117s 00:09:28.774 sys 0m0.091s 00:09:28.774 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.774 04:31:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:28.774 ************************************ 00:09:28.774 END TEST skip_rpc_with_delay 00:09:28.774 ************************************ 00:09:28.774 04:31:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:28.774 04:31:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:28.774 04:31:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:28.774 04:31:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.774 04:31:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.774 04:31:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.032 ************************************ 00:09:29.032 START TEST exit_on_failed_rpc_init 00:09:29.032 ************************************ 00:09:29.032 04:31:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:29.032 04:31:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57492 00:09:29.032 04:31:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57492 00:09:29.032 04:31:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57492 ']' 00:09:29.032 04:31:16 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:29.032 04:31:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.032 04:31:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.032 04:31:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.032 04:31:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.032 04:31:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:29.032 [2024-11-27 04:31:16.533885] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:09:29.032 [2024-11-27 04:31:16.534057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57492 ] 00:09:29.324 [2024-11-27 04:31:16.717488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.324 [2024-11-27 04:31:16.854514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@652 -- # local es=0 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:30.260 04:31:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:30.260 [2024-11-27 04:31:17.865002] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
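The second `spdk_tgt -m 0x2` instance above is launched through the `NOT` expected-failure wrapper: the test passes only if that instance exits nonzero (because the first instance already owns `/var/tmp/spdk.sock`). A simplified sketch of such a wrapper, mirroring the `es > 128` signal check visible in the trace (names and the exact status remapping are illustrative, not the precise autotest_common.sh logic):

```shell
#!/usr/bin/env bash
# Sketch of a "NOT"-style wrapper: run a command that is expected to
# fail, and invert its exit status so the surrounding test succeeds
# only when the command actually failed.
NOT() {
    local es=0
    "$@" || es=$?
    # Statuses above 128 mean "killed by signal N"; strip the offset
    # before deciding, as the es=234 -> es=106 remap in the log does.
    (( es > 128 )) && es=$(( es - 128 ))
    (( es != 0 ))   # succeed only if the wrapped command failed
}

NOT false && echo "false failed, as expected"
NOT true || echo "true unexpectedly succeeded"
```

This is why the RPC-socket conflict logged just below ("RPC Unix domain socket path /var/tmp/spdk.sock in use") is the passing outcome for this test, not an error.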
00:09:30.260 [2024-11-27 04:31:17.865190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57515 ] 00:09:30.518 [2024-11-27 04:31:18.052658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.777 [2024-11-27 04:31:18.186593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.777 [2024-11-27 04:31:18.186730] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:30.777 [2024-11-27 04:31:18.186753] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:30.777 [2024-11-27 04:31:18.186784] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57492 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57492 ']' 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57492 00:09:31.036 04:31:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57492 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.036 killing process with pid 57492 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57492' 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57492 00:09:31.036 04:31:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57492 00:09:33.567 00:09:33.567 real 0m4.378s 00:09:33.567 user 0m4.820s 00:09:33.567 sys 0m0.672s 00:09:33.567 04:31:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.567 04:31:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:33.567 ************************************ 00:09:33.567 END TEST exit_on_failed_rpc_init 00:09:33.567 ************************************ 00:09:33.567 04:31:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:33.567 00:09:33.567 real 0m23.357s 00:09:33.567 user 0m22.236s 00:09:33.567 sys 0m2.464s 00:09:33.567 04:31:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.567 04:31:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.567 ************************************ 00:09:33.567 END TEST skip_rpc 00:09:33.567 ************************************ 00:09:33.567 04:31:20 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:33.567 04:31:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.567 04:31:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.567 04:31:20 -- common/autotest_common.sh@10 -- # set +x 00:09:33.567 ************************************ 00:09:33.567 START TEST rpc_client 00:09:33.567 ************************************ 00:09:33.567 04:31:20 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:33.567 * Looking for test storage... 00:09:33.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:33.567 04:31:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.568 04:31:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.568 04:31:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.568 04:31:21 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@345 
-- # : 1 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.568 04:31:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:33.568 04:31:21 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.568 04:31:21 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.568 --rc genhtml_branch_coverage=1 00:09:33.568 --rc genhtml_function_coverage=1 00:09:33.568 --rc genhtml_legend=1 00:09:33.568 --rc geninfo_all_blocks=1 00:09:33.568 --rc geninfo_unexecuted_blocks=1 00:09:33.568 00:09:33.568 ' 00:09:33.568 04:31:21 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.568 --rc genhtml_branch_coverage=1 00:09:33.568 --rc genhtml_function_coverage=1 00:09:33.568 --rc 
genhtml_legend=1 00:09:33.568 --rc geninfo_all_blocks=1 00:09:33.568 --rc geninfo_unexecuted_blocks=1 00:09:33.568 00:09:33.568 ' 00:09:33.568 04:31:21 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.568 --rc genhtml_branch_coverage=1 00:09:33.568 --rc genhtml_function_coverage=1 00:09:33.568 --rc genhtml_legend=1 00:09:33.568 --rc geninfo_all_blocks=1 00:09:33.568 --rc geninfo_unexecuted_blocks=1 00:09:33.568 00:09:33.568 ' 00:09:33.568 04:31:21 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.568 --rc genhtml_branch_coverage=1 00:09:33.568 --rc genhtml_function_coverage=1 00:09:33.568 --rc genhtml_legend=1 00:09:33.568 --rc geninfo_all_blocks=1 00:09:33.568 --rc geninfo_unexecuted_blocks=1 00:09:33.568 00:09:33.568 ' 00:09:33.568 04:31:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:33.568 OK 00:09:33.568 04:31:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:33.568 00:09:33.568 real 0m0.263s 00:09:33.568 user 0m0.164s 00:09:33.568 sys 0m0.106s 00:09:33.568 04:31:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.568 ************************************ 00:09:33.568 END TEST rpc_client 00:09:33.568 ************************************ 00:09:33.568 04:31:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:33.568 04:31:21 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:33.568 04:31:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.568 04:31:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.568 04:31:21 -- common/autotest_common.sh@10 -- # set +x 00:09:33.568 ************************************ 00:09:33.568 START TEST json_config 
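The `lt 1.15 2` trace above exercises `cmp_versions` from scripts/common.sh, which splits each version on `.`, `-`, and `:` and compares numeric components left to right. A self-contained sketch of that comparison (an illustrative reimplementation assuming purely numeric components, not the exact script):

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced above: split both
# versions on . - : and compare components numerically, padding the
# shorter version with zeros (so 1.15 vs 2 compares 1<2 first).
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < len; i++ )); do
        a=${v1[i]:-0}; b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0        # strictly smaller: less-than
        (( a > b )) && return 1        # strictly larger: not less-than
    done
    return 1                           # all equal: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

Here the check decides whether the installed `lcov` (1.15) predates version 2, which selects the `--rc lcov_branch_coverage=1` option spelling seen in the log.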
00:09:33.568 ************************************ 00:09:33.568 04:31:21 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:33.827 04:31:21 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.827 04:31:21 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.827 04:31:21 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.827 04:31:21 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.827 04:31:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.827 04:31:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.827 04:31:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.827 04:31:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.827 04:31:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.827 04:31:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.827 04:31:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.827 04:31:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.827 04:31:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.827 04:31:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.828 04:31:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.828 04:31:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:33.828 04:31:21 json_config -- scripts/common.sh@345 -- # : 1 00:09:33.828 04:31:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.828 04:31:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.828 04:31:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:33.828 04:31:21 json_config -- scripts/common.sh@353 -- # local d=1 00:09:33.828 04:31:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.828 04:31:21 json_config -- scripts/common.sh@355 -- # echo 1 00:09:33.828 04:31:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.828 04:31:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:33.828 04:31:21 json_config -- scripts/common.sh@353 -- # local d=2 00:09:33.828 04:31:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.828 04:31:21 json_config -- scripts/common.sh@355 -- # echo 2 00:09:33.828 04:31:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.828 04:31:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.828 04:31:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.828 04:31:21 json_config -- scripts/common.sh@368 -- # return 0 00:09:33.828 04:31:21 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.828 04:31:21 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.828 --rc genhtml_branch_coverage=1 00:09:33.828 --rc genhtml_function_coverage=1 00:09:33.828 --rc genhtml_legend=1 00:09:33.828 --rc geninfo_all_blocks=1 00:09:33.828 --rc geninfo_unexecuted_blocks=1 00:09:33.828 00:09:33.828 ' 00:09:33.828 04:31:21 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.828 --rc genhtml_branch_coverage=1 00:09:33.828 --rc genhtml_function_coverage=1 00:09:33.828 --rc genhtml_legend=1 00:09:33.828 --rc geninfo_all_blocks=1 00:09:33.828 --rc geninfo_unexecuted_blocks=1 00:09:33.828 00:09:33.828 ' 00:09:33.828 04:31:21 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.828 --rc genhtml_branch_coverage=1 00:09:33.828 --rc genhtml_function_coverage=1 00:09:33.828 --rc genhtml_legend=1 00:09:33.828 --rc geninfo_all_blocks=1 00:09:33.828 --rc geninfo_unexecuted_blocks=1 00:09:33.828 00:09:33.828 ' 00:09:33.828 04:31:21 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.828 --rc genhtml_branch_coverage=1 00:09:33.828 --rc genhtml_function_coverage=1 00:09:33.828 --rc genhtml_legend=1 00:09:33.828 --rc geninfo_all_blocks=1 00:09:33.828 --rc geninfo_unexecuted_blocks=1 00:09:33.828 00:09:33.828 ' 00:09:33.828 04:31:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b590e78-c7c2-47b8-8d4e-2e32c1bfd2eb 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=7b590e78-c7c2-47b8-8d4e-2e32c1bfd2eb 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.828 04:31:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.828 04:31:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.828 04:31:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.828 04:31:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.828 04:31:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.828 04:31:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.828 04:31:21 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.828 04:31:21 json_config -- paths/export.sh@5 -- # export PATH 00:09:33.828 04:31:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@51 -- # : 0 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.828 04:31:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.828 04:31:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:09:33.828 04:31:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:33.828 04:31:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:33.828 04:31:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:33.828 04:31:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:33.828 WARNING: No tests are enabled so not running JSON configuration tests 00:09:33.828 04:31:21 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:33.828 04:31:21 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:33.828 00:09:33.828 real 0m0.174s 00:09:33.828 user 0m0.115s 00:09:33.828 sys 0m0.065s 00:09:33.828 04:31:21 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.828 04:31:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:33.828 ************************************ 00:09:33.828 END TEST json_config 00:09:33.828 ************************************ 00:09:33.828 04:31:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:33.828 04:31:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.828 04:31:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.828 04:31:21 -- common/autotest_common.sh@10 -- # set +x 00:09:33.828 ************************************ 00:09:33.828 START TEST json_config_extra_key 00:09:33.828 ************************************ 00:09:33.828 04:31:21 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:34.088 04:31:21 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.088 04:31:21 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:09:34.088 04:31:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.088 04:31:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:34.088 04:31:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.089 04:31:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:34.089 04:31:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.089 04:31:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.089 04:31:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.089 04:31:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.089 --rc genhtml_branch_coverage=1 00:09:34.089 --rc genhtml_function_coverage=1 00:09:34.089 --rc genhtml_legend=1 00:09:34.089 --rc geninfo_all_blocks=1 00:09:34.089 --rc geninfo_unexecuted_blocks=1 00:09:34.089 00:09:34.089 ' 00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.089 --rc genhtml_branch_coverage=1 00:09:34.089 --rc genhtml_function_coverage=1 00:09:34.089 --rc 
genhtml_legend=1 00:09:34.089 --rc geninfo_all_blocks=1 00:09:34.089 --rc geninfo_unexecuted_blocks=1 00:09:34.089 00:09:34.089 ' 00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:34.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.089 --rc genhtml_branch_coverage=1 00:09:34.089 --rc genhtml_function_coverage=1 00:09:34.089 --rc genhtml_legend=1 00:09:34.089 --rc geninfo_all_blocks=1 00:09:34.089 --rc geninfo_unexecuted_blocks=1 00:09:34.089 00:09:34.089 ' 00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.089 --rc genhtml_branch_coverage=1 00:09:34.089 --rc genhtml_function_coverage=1 00:09:34.089 --rc genhtml_legend=1 00:09:34.089 --rc geninfo_all_blocks=1 00:09:34.089 --rc geninfo_unexecuted_blocks=1 00:09:34.089 00:09:34.089 ' 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b590e78-c7c2-47b8-8d4e-2e32c1bfd2eb 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7b590e78-c7c2-47b8-8d4e-2e32c1bfd2eb 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.089 04:31:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.089 04:31:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.089 04:31:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.089 04:31:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.089 04:31:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.089 04:31:21 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.089 04:31:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.089 04:31:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:34.089 04:31:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.089 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.089 04:31:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:34.089 INFO: launching applications... 00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:09:34.089 04:31:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57720 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:34.089 Waiting for target to run... 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:34.089 04:31:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57720 /var/tmp/spdk_tgt.sock 00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57720 ']' 00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.089 04:31:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:34.348 [2024-11-27 04:31:21.723009] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:09:34.348 [2024-11-27 04:31:21.723169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57720 ] 00:09:34.606 [2024-11-27 04:31:22.193821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.864 [2024-11-27 04:31:22.340031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.428 04:31:23 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.686 00:09:35.686 04:31:23 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:35.686 04:31:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:35.686 INFO: shutting down applications... 00:09:35.686 04:31:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:09:35.686 04:31:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:35.686 04:31:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:35.686 04:31:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:35.686 04:31:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57720 ]] 00:09:35.686 04:31:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57720 00:09:35.686 04:31:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:35.686 04:31:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:35.686 04:31:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57720 00:09:35.686 04:31:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:35.944 04:31:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:35.944 04:31:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:35.944 04:31:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57720 00:09:35.944 04:31:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:36.511 04:31:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:36.511 04:31:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:36.511 04:31:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57720 00:09:36.511 04:31:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:37.078 04:31:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:37.078 04:31:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:37.078 04:31:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57720 00:09:37.078 04:31:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:37.653 04:31:25 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:09:37.653 04:31:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:37.653 04:31:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57720 00:09:37.653 04:31:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:38.235 04:31:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:38.235 04:31:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:38.235 04:31:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57720 00:09:38.235 04:31:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:38.494 04:31:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:38.495 04:31:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:38.495 04:31:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57720 00:09:38.495 04:31:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:38.495 04:31:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:38.495 04:31:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:38.495 SPDK target shutdown done 00:09:38.495 04:31:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:38.495 Success 00:09:38.495 04:31:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:38.495 00:09:38.495 real 0m4.681s 00:09:38.495 user 0m4.140s 00:09:38.495 sys 0m0.654s 00:09:38.495 04:31:26 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.495 ************************************ 00:09:38.495 END TEST json_config_extra_key 00:09:38.495 ************************************ 00:09:38.495 04:31:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:38.753 04:31:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:38.753 04:31:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.753 04:31:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.754 04:31:26 -- common/autotest_common.sh@10 -- # set +x 00:09:38.754 ************************************ 00:09:38.754 START TEST alias_rpc 00:09:38.754 ************************************ 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:38.754 * Looking for test storage... 00:09:38.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:38.754 04:31:26 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.754 04:31:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.754 --rc genhtml_branch_coverage=1 00:09:38.754 --rc genhtml_function_coverage=1 00:09:38.754 --rc genhtml_legend=1 00:09:38.754 --rc geninfo_all_blocks=1 00:09:38.754 --rc geninfo_unexecuted_blocks=1 00:09:38.754 00:09:38.754 ' 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.754 --rc genhtml_branch_coverage=1 00:09:38.754 --rc genhtml_function_coverage=1 00:09:38.754 --rc 
genhtml_legend=1 00:09:38.754 --rc geninfo_all_blocks=1 00:09:38.754 --rc geninfo_unexecuted_blocks=1 00:09:38.754 00:09:38.754 ' 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.754 --rc genhtml_branch_coverage=1 00:09:38.754 --rc genhtml_function_coverage=1 00:09:38.754 --rc genhtml_legend=1 00:09:38.754 --rc geninfo_all_blocks=1 00:09:38.754 --rc geninfo_unexecuted_blocks=1 00:09:38.754 00:09:38.754 ' 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.754 --rc genhtml_branch_coverage=1 00:09:38.754 --rc genhtml_function_coverage=1 00:09:38.754 --rc genhtml_legend=1 00:09:38.754 --rc geninfo_all_blocks=1 00:09:38.754 --rc geninfo_unexecuted_blocks=1 00:09:38.754 00:09:38.754 ' 00:09:38.754 04:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:38.754 04:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57837 00:09:38.754 04:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57837 00:09:38.754 04:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57837 ']' 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.754 04:31:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.013 [2024-11-27 04:31:26.453397] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:09:39.013 [2024-11-27 04:31:26.453870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57837 ] 00:09:39.272 [2024-11-27 04:31:26.641809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.272 [2024-11-27 04:31:26.806931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.207 04:31:27 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.207 04:31:27 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:40.207 04:31:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:40.465 04:31:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57837 00:09:40.465 04:31:28 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57837 ']' 00:09:40.465 04:31:28 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57837 00:09:40.465 04:31:28 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:40.465 04:31:28 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.465 04:31:28 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57837 00:09:40.465 killing process with pid 57837 00:09:40.465 04:31:28 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.465 04:31:28 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.465 04:31:28 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57837' 00:09:40.465 04:31:28 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57837 00:09:40.465 04:31:28 alias_rpc -- common/autotest_common.sh@978 -- # wait 57837 00:09:42.995 00:09:42.995 real 0m4.254s 00:09:42.995 user 0m4.407s 00:09:42.995 sys 0m0.643s 00:09:42.995 04:31:30 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.995 04:31:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.995 ************************************ 00:09:42.995 END TEST alias_rpc 00:09:42.995 ************************************ 00:09:42.995 04:31:30 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:42.995 04:31:30 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:42.996 04:31:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.996 04:31:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.996 04:31:30 -- common/autotest_common.sh@10 -- # set +x 00:09:42.996 ************************************ 00:09:42.996 START TEST spdkcli_tcp 00:09:42.996 ************************************ 00:09:42.996 04:31:30 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:42.996 * Looking for test storage... 
00:09:42.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:42.996 04:31:30 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.996 04:31:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.996 04:31:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.996 04:31:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.996 04:31:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:43.254 04:31:30 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.254 04:31:30 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.254 04:31:30 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.254 04:31:30 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:43.254 04:31:30 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.254 04:31:30 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.254 --rc genhtml_branch_coverage=1 00:09:43.254 --rc genhtml_function_coverage=1 00:09:43.254 --rc genhtml_legend=1 00:09:43.254 --rc geninfo_all_blocks=1 00:09:43.254 --rc geninfo_unexecuted_blocks=1 00:09:43.254 00:09:43.254 ' 00:09:43.255 04:31:30 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.255 --rc genhtml_branch_coverage=1 00:09:43.255 --rc genhtml_function_coverage=1 00:09:43.255 --rc genhtml_legend=1 00:09:43.255 --rc geninfo_all_blocks=1 00:09:43.255 --rc geninfo_unexecuted_blocks=1 00:09:43.255 00:09:43.255 ' 00:09:43.255 04:31:30 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.255 --rc genhtml_branch_coverage=1 00:09:43.255 --rc genhtml_function_coverage=1 00:09:43.255 --rc genhtml_legend=1 00:09:43.255 --rc geninfo_all_blocks=1 00:09:43.255 --rc geninfo_unexecuted_blocks=1 00:09:43.255 00:09:43.255 ' 00:09:43.255 04:31:30 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.255 --rc genhtml_branch_coverage=1 00:09:43.255 --rc genhtml_function_coverage=1 00:09:43.255 --rc genhtml_legend=1 00:09:43.255 --rc geninfo_all_blocks=1 00:09:43.255 --rc geninfo_unexecuted_blocks=1 00:09:43.255 00:09:43.255 ' 00:09:43.255 04:31:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:43.255 04:31:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:43.255 04:31:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:43.255 04:31:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:43.255 04:31:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:43.255 04:31:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:43.255 04:31:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:43.255 04:31:30 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.255 04:31:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:43.255 04:31:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57944 00:09:43.255 04:31:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57944 00:09:43.255 04:31:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:43.255 04:31:30 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57944 ']' 00:09:43.255 04:31:30 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.255 04:31:30 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.255 04:31:30 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.255 04:31:30 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.255 04:31:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.255 [2024-11-27 04:31:30.753546] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:09:43.255 [2024-11-27 04:31:30.753754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57944 ] 00:09:43.513 [2024-11-27 04:31:30.946062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.513 [2024-11-27 04:31:31.113448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.513 [2024-11-27 04:31:31.113458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.448 04:31:31 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.448 04:31:31 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:44.448 04:31:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:44.448 04:31:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57961 00:09:44.448 04:31:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:44.706 [ 00:09:44.706 "bdev_malloc_delete", 00:09:44.706 "bdev_malloc_create", 00:09:44.706 "bdev_null_resize", 00:09:44.706 "bdev_null_delete", 00:09:44.706 "bdev_null_create", 00:09:44.706 "bdev_nvme_cuse_unregister", 00:09:44.706 "bdev_nvme_cuse_register", 00:09:44.706 "bdev_opal_new_user", 00:09:44.706 "bdev_opal_set_lock_state", 00:09:44.706 "bdev_opal_delete", 00:09:44.706 "bdev_opal_get_info", 00:09:44.706 "bdev_opal_create", 00:09:44.706 "bdev_nvme_opal_revert", 00:09:44.707 "bdev_nvme_opal_init", 00:09:44.707 "bdev_nvme_send_cmd", 00:09:44.707 "bdev_nvme_set_keys", 00:09:44.707 "bdev_nvme_get_path_iostat", 00:09:44.707 "bdev_nvme_get_mdns_discovery_info", 00:09:44.707 "bdev_nvme_stop_mdns_discovery", 00:09:44.707 "bdev_nvme_start_mdns_discovery", 00:09:44.707 "bdev_nvme_set_multipath_policy", 00:09:44.707 
"bdev_nvme_set_preferred_path", 00:09:44.707 "bdev_nvme_get_io_paths", 00:09:44.707 "bdev_nvme_remove_error_injection", 00:09:44.707 "bdev_nvme_add_error_injection", 00:09:44.707 "bdev_nvme_get_discovery_info", 00:09:44.707 "bdev_nvme_stop_discovery", 00:09:44.707 "bdev_nvme_start_discovery", 00:09:44.707 "bdev_nvme_get_controller_health_info", 00:09:44.707 "bdev_nvme_disable_controller", 00:09:44.707 "bdev_nvme_enable_controller", 00:09:44.707 "bdev_nvme_reset_controller", 00:09:44.707 "bdev_nvme_get_transport_statistics", 00:09:44.707 "bdev_nvme_apply_firmware", 00:09:44.707 "bdev_nvme_detach_controller", 00:09:44.707 "bdev_nvme_get_controllers", 00:09:44.707 "bdev_nvme_attach_controller", 00:09:44.707 "bdev_nvme_set_hotplug", 00:09:44.707 "bdev_nvme_set_options", 00:09:44.707 "bdev_passthru_delete", 00:09:44.707 "bdev_passthru_create", 00:09:44.707 "bdev_lvol_set_parent_bdev", 00:09:44.707 "bdev_lvol_set_parent", 00:09:44.707 "bdev_lvol_check_shallow_copy", 00:09:44.707 "bdev_lvol_start_shallow_copy", 00:09:44.707 "bdev_lvol_grow_lvstore", 00:09:44.707 "bdev_lvol_get_lvols", 00:09:44.707 "bdev_lvol_get_lvstores", 00:09:44.707 "bdev_lvol_delete", 00:09:44.707 "bdev_lvol_set_read_only", 00:09:44.707 "bdev_lvol_resize", 00:09:44.707 "bdev_lvol_decouple_parent", 00:09:44.707 "bdev_lvol_inflate", 00:09:44.707 "bdev_lvol_rename", 00:09:44.707 "bdev_lvol_clone_bdev", 00:09:44.707 "bdev_lvol_clone", 00:09:44.707 "bdev_lvol_snapshot", 00:09:44.707 "bdev_lvol_create", 00:09:44.707 "bdev_lvol_delete_lvstore", 00:09:44.707 "bdev_lvol_rename_lvstore", 00:09:44.707 "bdev_lvol_create_lvstore", 00:09:44.707 "bdev_raid_set_options", 00:09:44.707 "bdev_raid_remove_base_bdev", 00:09:44.707 "bdev_raid_add_base_bdev", 00:09:44.707 "bdev_raid_delete", 00:09:44.707 "bdev_raid_create", 00:09:44.707 "bdev_raid_get_bdevs", 00:09:44.707 "bdev_error_inject_error", 00:09:44.707 "bdev_error_delete", 00:09:44.707 "bdev_error_create", 00:09:44.707 "bdev_split_delete", 00:09:44.707 
"bdev_split_create", 00:09:44.707 "bdev_delay_delete", 00:09:44.707 "bdev_delay_create", 00:09:44.707 "bdev_delay_update_latency", 00:09:44.707 "bdev_zone_block_delete", 00:09:44.707 "bdev_zone_block_create", 00:09:44.707 "blobfs_create", 00:09:44.707 "blobfs_detect", 00:09:44.707 "blobfs_set_cache_size", 00:09:44.707 "bdev_aio_delete", 00:09:44.707 "bdev_aio_rescan", 00:09:44.707 "bdev_aio_create", 00:09:44.707 "bdev_ftl_set_property", 00:09:44.707 "bdev_ftl_get_properties", 00:09:44.707 "bdev_ftl_get_stats", 00:09:44.707 "bdev_ftl_unmap", 00:09:44.707 "bdev_ftl_unload", 00:09:44.707 "bdev_ftl_delete", 00:09:44.707 "bdev_ftl_load", 00:09:44.707 "bdev_ftl_create", 00:09:44.707 "bdev_virtio_attach_controller", 00:09:44.707 "bdev_virtio_scsi_get_devices", 00:09:44.707 "bdev_virtio_detach_controller", 00:09:44.707 "bdev_virtio_blk_set_hotplug", 00:09:44.707 "bdev_iscsi_delete", 00:09:44.707 "bdev_iscsi_create", 00:09:44.707 "bdev_iscsi_set_options", 00:09:44.707 "accel_error_inject_error", 00:09:44.707 "ioat_scan_accel_module", 00:09:44.707 "dsa_scan_accel_module", 00:09:44.707 "iaa_scan_accel_module", 00:09:44.707 "keyring_file_remove_key", 00:09:44.707 "keyring_file_add_key", 00:09:44.707 "keyring_linux_set_options", 00:09:44.707 "fsdev_aio_delete", 00:09:44.707 "fsdev_aio_create", 00:09:44.707 "iscsi_get_histogram", 00:09:44.707 "iscsi_enable_histogram", 00:09:44.707 "iscsi_set_options", 00:09:44.707 "iscsi_get_auth_groups", 00:09:44.707 "iscsi_auth_group_remove_secret", 00:09:44.707 "iscsi_auth_group_add_secret", 00:09:44.707 "iscsi_delete_auth_group", 00:09:44.707 "iscsi_create_auth_group", 00:09:44.707 "iscsi_set_discovery_auth", 00:09:44.707 "iscsi_get_options", 00:09:44.707 "iscsi_target_node_request_logout", 00:09:44.707 "iscsi_target_node_set_redirect", 00:09:44.707 "iscsi_target_node_set_auth", 00:09:44.707 "iscsi_target_node_add_lun", 00:09:44.707 "iscsi_get_stats", 00:09:44.707 "iscsi_get_connections", 00:09:44.707 "iscsi_portal_group_set_auth", 
00:09:44.707 "iscsi_start_portal_group", 00:09:44.707 "iscsi_delete_portal_group", 00:09:44.707 "iscsi_create_portal_group", 00:09:44.707 "iscsi_get_portal_groups", 00:09:44.707 "iscsi_delete_target_node", 00:09:44.707 "iscsi_target_node_remove_pg_ig_maps", 00:09:44.707 "iscsi_target_node_add_pg_ig_maps", 00:09:44.707 "iscsi_create_target_node", 00:09:44.707 "iscsi_get_target_nodes", 00:09:44.707 "iscsi_delete_initiator_group", 00:09:44.707 "iscsi_initiator_group_remove_initiators", 00:09:44.707 "iscsi_initiator_group_add_initiators", 00:09:44.707 "iscsi_create_initiator_group", 00:09:44.707 "iscsi_get_initiator_groups", 00:09:44.707 "nvmf_set_crdt", 00:09:44.707 "nvmf_set_config", 00:09:44.707 "nvmf_set_max_subsystems", 00:09:44.707 "nvmf_stop_mdns_prr", 00:09:44.707 "nvmf_publish_mdns_prr", 00:09:44.707 "nvmf_subsystem_get_listeners", 00:09:44.707 "nvmf_subsystem_get_qpairs", 00:09:44.707 "nvmf_subsystem_get_controllers", 00:09:44.707 "nvmf_get_stats", 00:09:44.707 "nvmf_get_transports", 00:09:44.707 "nvmf_create_transport", 00:09:44.707 "nvmf_get_targets", 00:09:44.707 "nvmf_delete_target", 00:09:44.707 "nvmf_create_target", 00:09:44.707 "nvmf_subsystem_allow_any_host", 00:09:44.707 "nvmf_subsystem_set_keys", 00:09:44.707 "nvmf_subsystem_remove_host", 00:09:44.707 "nvmf_subsystem_add_host", 00:09:44.707 "nvmf_ns_remove_host", 00:09:44.707 "nvmf_ns_add_host", 00:09:44.707 "nvmf_subsystem_remove_ns", 00:09:44.707 "nvmf_subsystem_set_ns_ana_group", 00:09:44.707 "nvmf_subsystem_add_ns", 00:09:44.707 "nvmf_subsystem_listener_set_ana_state", 00:09:44.707 "nvmf_discovery_get_referrals", 00:09:44.707 "nvmf_discovery_remove_referral", 00:09:44.707 "nvmf_discovery_add_referral", 00:09:44.707 "nvmf_subsystem_remove_listener", 00:09:44.707 "nvmf_subsystem_add_listener", 00:09:44.707 "nvmf_delete_subsystem", 00:09:44.707 "nvmf_create_subsystem", 00:09:44.707 "nvmf_get_subsystems", 00:09:44.707 "env_dpdk_get_mem_stats", 00:09:44.708 "nbd_get_disks", 00:09:44.708 
"nbd_stop_disk", 00:09:44.708 "nbd_start_disk", 00:09:44.708 "ublk_recover_disk", 00:09:44.708 "ublk_get_disks", 00:09:44.708 "ublk_stop_disk", 00:09:44.708 "ublk_start_disk", 00:09:44.708 "ublk_destroy_target", 00:09:44.708 "ublk_create_target", 00:09:44.708 "virtio_blk_create_transport", 00:09:44.708 "virtio_blk_get_transports", 00:09:44.708 "vhost_controller_set_coalescing", 00:09:44.708 "vhost_get_controllers", 00:09:44.708 "vhost_delete_controller", 00:09:44.708 "vhost_create_blk_controller", 00:09:44.708 "vhost_scsi_controller_remove_target", 00:09:44.708 "vhost_scsi_controller_add_target", 00:09:44.708 "vhost_start_scsi_controller", 00:09:44.708 "vhost_create_scsi_controller", 00:09:44.708 "thread_set_cpumask", 00:09:44.708 "scheduler_set_options", 00:09:44.708 "framework_get_governor", 00:09:44.708 "framework_get_scheduler", 00:09:44.708 "framework_set_scheduler", 00:09:44.708 "framework_get_reactors", 00:09:44.708 "thread_get_io_channels", 00:09:44.708 "thread_get_pollers", 00:09:44.708 "thread_get_stats", 00:09:44.708 "framework_monitor_context_switch", 00:09:44.708 "spdk_kill_instance", 00:09:44.708 "log_enable_timestamps", 00:09:44.708 "log_get_flags", 00:09:44.708 "log_clear_flag", 00:09:44.708 "log_set_flag", 00:09:44.708 "log_get_level", 00:09:44.708 "log_set_level", 00:09:44.708 "log_get_print_level", 00:09:44.708 "log_set_print_level", 00:09:44.708 "framework_enable_cpumask_locks", 00:09:44.708 "framework_disable_cpumask_locks", 00:09:44.708 "framework_wait_init", 00:09:44.708 "framework_start_init", 00:09:44.708 "scsi_get_devices", 00:09:44.708 "bdev_get_histogram", 00:09:44.708 "bdev_enable_histogram", 00:09:44.708 "bdev_set_qos_limit", 00:09:44.708 "bdev_set_qd_sampling_period", 00:09:44.708 "bdev_get_bdevs", 00:09:44.708 "bdev_reset_iostat", 00:09:44.708 "bdev_get_iostat", 00:09:44.708 "bdev_examine", 00:09:44.708 "bdev_wait_for_examine", 00:09:44.708 "bdev_set_options", 00:09:44.708 "accel_get_stats", 00:09:44.708 "accel_set_options", 
00:09:44.708 "accel_set_driver", 00:09:44.708 "accel_crypto_key_destroy", 00:09:44.708 "accel_crypto_keys_get", 00:09:44.708 "accel_crypto_key_create", 00:09:44.708 "accel_assign_opc", 00:09:44.708 "accel_get_module_info", 00:09:44.708 "accel_get_opc_assignments", 00:09:44.708 "vmd_rescan", 00:09:44.708 "vmd_remove_device", 00:09:44.708 "vmd_enable", 00:09:44.708 "sock_get_default_impl", 00:09:44.708 "sock_set_default_impl", 00:09:44.708 "sock_impl_set_options", 00:09:44.708 "sock_impl_get_options", 00:09:44.708 "iobuf_get_stats", 00:09:44.708 "iobuf_set_options", 00:09:44.708 "keyring_get_keys", 00:09:44.708 "framework_get_pci_devices", 00:09:44.708 "framework_get_config", 00:09:44.708 "framework_get_subsystems", 00:09:44.708 "fsdev_set_opts", 00:09:44.708 "fsdev_get_opts", 00:09:44.708 "trace_get_info", 00:09:44.708 "trace_get_tpoint_group_mask", 00:09:44.708 "trace_disable_tpoint_group", 00:09:44.708 "trace_enable_tpoint_group", 00:09:44.708 "trace_clear_tpoint_mask", 00:09:44.708 "trace_set_tpoint_mask", 00:09:44.708 "notify_get_notifications", 00:09:44.708 "notify_get_types", 00:09:44.708 "spdk_get_version", 00:09:44.708 "rpc_get_methods" 00:09:44.708 ] 00:09:44.708 04:31:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:44.708 04:31:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.708 04:31:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.708 04:31:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:44.708 04:31:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57944 00:09:44.708 04:31:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57944 ']' 00:09:44.708 04:31:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57944 00:09:44.708 04:31:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:44.967 04:31:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.967 04:31:32 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57944 00:09:44.967 killing process with pid 57944 00:09:44.967 04:31:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.967 04:31:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.967 04:31:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57944' 00:09:44.967 04:31:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57944 00:09:44.967 04:31:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57944 00:09:47.502 ************************************ 00:09:47.502 END TEST spdkcli_tcp 00:09:47.502 ************************************ 00:09:47.502 00:09:47.502 real 0m4.149s 00:09:47.502 user 0m7.487s 00:09:47.502 sys 0m0.687s 00:09:47.502 04:31:34 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.502 04:31:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.502 04:31:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:47.502 04:31:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.502 04:31:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.502 04:31:34 -- common/autotest_common.sh@10 -- # set +x 00:09:47.502 ************************************ 00:09:47.502 START TEST dpdk_mem_utility 00:09:47.502 ************************************ 00:09:47.502 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:47.502 * Looking for test storage... 
00:09:47.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:47.502 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.502 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.502 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.502 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.502 04:31:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:47.502 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.502 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.502 --rc genhtml_branch_coverage=1 00:09:47.502 --rc genhtml_function_coverage=1 00:09:47.502 --rc genhtml_legend=1 00:09:47.502 --rc geninfo_all_blocks=1 00:09:47.502 --rc geninfo_unexecuted_blocks=1 00:09:47.502 00:09:47.502 ' 00:09:47.502 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.502 --rc genhtml_branch_coverage=1 00:09:47.502 --rc genhtml_function_coverage=1 00:09:47.502 --rc genhtml_legend=1 00:09:47.502 --rc geninfo_all_blocks=1 00:09:47.502 --rc 
geninfo_unexecuted_blocks=1 00:09:47.502 00:09:47.502 ' 00:09:47.502 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.503 --rc genhtml_branch_coverage=1 00:09:47.503 --rc genhtml_function_coverage=1 00:09:47.503 --rc genhtml_legend=1 00:09:47.503 --rc geninfo_all_blocks=1 00:09:47.503 --rc geninfo_unexecuted_blocks=1 00:09:47.503 00:09:47.503 ' 00:09:47.503 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.503 --rc genhtml_branch_coverage=1 00:09:47.503 --rc genhtml_function_coverage=1 00:09:47.503 --rc genhtml_legend=1 00:09:47.503 --rc geninfo_all_blocks=1 00:09:47.503 --rc geninfo_unexecuted_blocks=1 00:09:47.503 00:09:47.503 ' 00:09:47.503 04:31:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:47.503 04:31:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58066 00:09:47.503 04:31:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58066 00:09:47.503 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58066 ']' 00:09:47.503 04:31:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:47.503 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.503 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.503 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:47.503 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.503 04:31:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:47.503 [2024-11-27 04:31:34.936562] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:09:47.503 [2024-11-27 04:31:34.936740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58066 ] 00:09:47.503 [2024-11-27 04:31:35.109036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.761 [2024-11-27 04:31:35.242081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.697 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.697 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:48.697 04:31:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:48.697 04:31:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:48.697 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.697 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:48.697 { 00:09:48.697 "filename": "/tmp/spdk_mem_dump.txt" 00:09:48.697 } 00:09:48.697 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.697 04:31:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:48.697 DPDK memory size 824.000000 MiB in 1 heap(s) 00:09:48.697 1 heaps totaling size 824.000000 MiB 00:09:48.697 size: 824.000000 MiB heap id: 0 00:09:48.697 end heaps---------- 00:09:48.698 9 mempools totaling size 603.782043 MiB 00:09:48.698 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:48.698 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:48.698 size: 100.555481 MiB name: bdev_io_58066 00:09:48.698 size: 50.003479 MiB name: msgpool_58066 00:09:48.698 size: 36.509338 MiB name: fsdev_io_58066 00:09:48.698 size: 21.763794 MiB name: PDU_Pool 00:09:48.698 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:48.698 size: 4.133484 MiB name: evtpool_58066 00:09:48.698 size: 0.026123 MiB name: Session_Pool 00:09:48.698 end mempools------- 00:09:48.698 6 memzones totaling size 4.142822 MiB 00:09:48.698 size: 1.000366 MiB name: RG_ring_0_58066 00:09:48.698 size: 1.000366 MiB name: RG_ring_1_58066 00:09:48.698 size: 1.000366 MiB name: RG_ring_4_58066 00:09:48.698 size: 1.000366 MiB name: RG_ring_5_58066 00:09:48.698 size: 0.125366 MiB name: RG_ring_2_58066 00:09:48.698 size: 0.015991 MiB name: RG_ring_3_58066 00:09:48.698 end memzones------- 00:09:48.698 04:31:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:48.698 heap id: 0 total size: 824.000000 MiB number of busy elements: 309 number of free elements: 18 00:09:48.698 list of free elements. 
size: 16.782837 MiB 00:09:48.698 element at address: 0x200006400000 with size: 1.995972 MiB 00:09:48.698 element at address: 0x20000a600000 with size: 1.995972 MiB 00:09:48.698 element at address: 0x200003e00000 with size: 1.991028 MiB 00:09:48.698 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:48.698 element at address: 0x200019900040 with size: 0.999939 MiB 00:09:48.698 element at address: 0x200019a00000 with size: 0.999084 MiB 00:09:48.698 element at address: 0x200032600000 with size: 0.994324 MiB 00:09:48.698 element at address: 0x200000400000 with size: 0.992004 MiB 00:09:48.698 element at address: 0x200019200000 with size: 0.959656 MiB 00:09:48.698 element at address: 0x200019d00040 with size: 0.936401 MiB 00:09:48.698 element at address: 0x200000200000 with size: 0.716980 MiB 00:09:48.698 element at address: 0x20001b400000 with size: 0.563660 MiB 00:09:48.698 element at address: 0x200000c00000 with size: 0.489197 MiB 00:09:48.698 element at address: 0x200019600000 with size: 0.488708 MiB 00:09:48.698 element at address: 0x200019e00000 with size: 0.485413 MiB 00:09:48.698 element at address: 0x200012c00000 with size: 0.433228 MiB 00:09:48.698 element at address: 0x200028800000 with size: 0.390442 MiB 00:09:48.698 element at address: 0x200000800000 with size: 0.350891 MiB 00:09:48.698 list of standard malloc elements. 
size: 199.286255 MiB 00:09:48.698 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:09:48.698 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:09:48.698 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:48.698 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:48.698 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:09:48.698 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:48.698 element at address: 0x200019deff40 with size: 0.062683 MiB 00:09:48.698 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:48.698 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:09:48.698 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:09:48.698 element at address: 0x200012bff040 with size: 0.000305 MiB 00:09:48.698 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:09:48.698 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:09:48.698 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:09:48.698 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200000cff000 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:09:48.698 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200012bff180 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200012bff280 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200012bff380 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200012bff480 with size: 0.000244 MiB 00:09:48.698 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200012bff680 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200012bff780 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200012bff880 with size: 0.000244 MiB 00:09:48.698 element at address: 0x200012bff980 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:09:48.699 
element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200019affc40 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4917c0 with size: 0.000244 
MiB 00:09:48.699 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4933c0 
with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:09:48.699 element at 
address: 0x20001b494fc0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200028863f40 with size: 0.000244 MiB 00:09:48.699 element at address: 0x200028864040 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886af80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886b080 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886b180 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886b280 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886b380 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886b480 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886b580 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886b680 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886b780 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886b880 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886b980 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886be80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886c080 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886c180 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886c280 with size: 0.000244 MiB 
00:09:48.699 element at address: 0x20002886c380 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886c480 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886c580 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886c680 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886c780 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886c880 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886c980 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886d080 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886d180 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886d280 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886d380 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886d480 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886d580 with size: 0.000244 MiB 00:09:48.699 element at address: 0x20002886d680 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886d780 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886d880 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886d980 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886da80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886db80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886de80 with 
size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886df80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886e080 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886e180 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886e280 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886e380 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886e480 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886e580 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886e680 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886e780 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886e880 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886e980 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886f080 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886f180 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886f280 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886f380 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886f480 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886f580 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886f680 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886f780 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886f880 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886f980 with size: 0.000244 MiB 00:09:48.700 element at address: 
0x20002886fa80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:09:48.700 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:09:48.700 list of memzone associated elements. size: 607.930908 MiB 00:09:48.700 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:09:48.700 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:48.700 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:09:48.700 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:48.700 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:09:48.700 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58066_0 00:09:48.700 element at address: 0x200000dff340 with size: 48.003113 MiB 00:09:48.700 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58066_0 00:09:48.700 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:09:48.700 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58066_0 00:09:48.700 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:09:48.700 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:48.700 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:09:48.700 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:48.700 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:09:48.700 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58066_0 00:09:48.700 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:09:48.700 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58066 00:09:48.700 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:48.700 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58066 00:09:48.700 element at 
address: 0x2000196fde00 with size: 1.008179 MiB 00:09:48.700 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:48.700 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:09:48.700 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:48.700 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:48.700 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:48.700 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:09:48.700 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:48.700 element at address: 0x200000cff100 with size: 1.000549 MiB 00:09:48.700 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58066 00:09:48.700 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:09:48.700 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58066 00:09:48.700 element at address: 0x200019affd40 with size: 1.000549 MiB 00:09:48.700 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58066 00:09:48.700 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:09:48.700 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58066 00:09:48.700 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:09:48.700 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58066 00:09:48.700 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:09:48.700 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58066 00:09:48.700 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:09:48.700 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:48.700 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:09:48.700 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:48.700 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:09:48.700 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:48.700 
element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:09:48.700 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58066 00:09:48.700 element at address: 0x20000085df80 with size: 0.125549 MiB 00:09:48.700 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58066 00:09:48.700 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:09:48.700 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:48.700 element at address: 0x200028864140 with size: 0.023804 MiB 00:09:48.700 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:48.700 element at address: 0x200000859d40 with size: 0.016174 MiB 00:09:48.700 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58066 00:09:48.700 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:09:48.700 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:48.700 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:09:48.700 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58066 00:09:48.700 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:09:48.700 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58066 00:09:48.700 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:09:48.700 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58066 00:09:48.700 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:09:48.700 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:48.700 04:31:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:48.700 04:31:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58066 00:09:48.700 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58066 ']' 00:09:48.700 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58066 00:09:48.700 04:31:36 dpdk_mem_utility -- 
common/autotest_common.sh@959 -- # uname 00:09:48.700 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.700 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58066 00:09:48.700 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.700 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.700 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58066' 00:09:48.700 killing process with pid 58066 00:09:48.700 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58066 00:09:48.700 04:31:36 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58066 00:09:51.232 00:09:51.232 real 0m3.875s 00:09:51.232 user 0m3.922s 00:09:51.232 sys 0m0.611s 00:09:51.232 04:31:38 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.232 04:31:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:51.232 ************************************ 00:09:51.232 END TEST dpdk_mem_utility 00:09:51.232 ************************************ 00:09:51.232 04:31:38 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:51.232 04:31:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.232 04:31:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.232 04:31:38 -- common/autotest_common.sh@10 -- # set +x 00:09:51.232 ************************************ 00:09:51.232 START TEST event 00:09:51.232 ************************************ 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:51.232 * Looking for test storage... 
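The `killprocess` trace above follows a common teardown pattern: probe the pid with `kill -0` (which signals nothing, only checks existence and permission), sanity-check the process name with `ps`, send SIGTERM, then reap. A minimal standalone sketch of that pattern — the function name and structure here are illustrative, not the actual `autotest_common.sh` helper:

```shell
#!/usr/bin/env bash
# Sketch of the pid-teardown pattern seen in the trace:
#   kill -0   probes whether the pid exists without sending a signal,
#   ps -o comm= reports the process name for a sanity check,
#   kill      sends SIGTERM,
#   wait      reaps the process if it is a child of this shell.
stop_pid() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    local name
    name=$(ps -o comm= -p "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # only reaps our own children
}
```

The `wait` at the end mirrors the `kill 58066` / `wait 58066` pair in the trace; for a pid that is not a child of the current shell, a polling loop on `kill -0` would be needed instead.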
00:09:51.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.232 04:31:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.232 04:31:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.232 04:31:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.232 04:31:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.232 04:31:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.232 04:31:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.232 04:31:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.232 04:31:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.232 04:31:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.232 04:31:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.232 04:31:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.232 04:31:38 event -- scripts/common.sh@344 -- # case "$op" in 00:09:51.232 04:31:38 event -- scripts/common.sh@345 -- # : 1 00:09:51.232 04:31:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.232 04:31:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.232 04:31:38 event -- scripts/common.sh@365 -- # decimal 1 00:09:51.232 04:31:38 event -- scripts/common.sh@353 -- # local d=1 00:09:51.232 04:31:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.232 04:31:38 event -- scripts/common.sh@355 -- # echo 1 00:09:51.232 04:31:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.232 04:31:38 event -- scripts/common.sh@366 -- # decimal 2 00:09:51.232 04:31:38 event -- scripts/common.sh@353 -- # local d=2 00:09:51.232 04:31:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.232 04:31:38 event -- scripts/common.sh@355 -- # echo 2 00:09:51.232 04:31:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.232 04:31:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.232 04:31:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.232 04:31:38 event -- scripts/common.sh@368 -- # return 0 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.232 --rc genhtml_branch_coverage=1 00:09:51.232 --rc genhtml_function_coverage=1 00:09:51.232 --rc genhtml_legend=1 00:09:51.232 --rc geninfo_all_blocks=1 00:09:51.232 --rc geninfo_unexecuted_blocks=1 00:09:51.232 00:09:51.232 ' 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.232 --rc genhtml_branch_coverage=1 00:09:51.232 --rc genhtml_function_coverage=1 00:09:51.232 --rc genhtml_legend=1 00:09:51.232 --rc geninfo_all_blocks=1 00:09:51.232 --rc geninfo_unexecuted_blocks=1 00:09:51.232 00:09:51.232 ' 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:51.232 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:51.232 --rc genhtml_branch_coverage=1 00:09:51.232 --rc genhtml_function_coverage=1 00:09:51.232 --rc genhtml_legend=1 00:09:51.232 --rc geninfo_all_blocks=1 00:09:51.232 --rc geninfo_unexecuted_blocks=1 00:09:51.232 00:09:51.232 ' 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.232 --rc genhtml_branch_coverage=1 00:09:51.232 --rc genhtml_function_coverage=1 00:09:51.232 --rc genhtml_legend=1 00:09:51.232 --rc geninfo_all_blocks=1 00:09:51.232 --rc geninfo_unexecuted_blocks=1 00:09:51.232 00:09:51.232 ' 00:09:51.232 04:31:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:51.232 04:31:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:51.232 04:31:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:51.232 04:31:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.232 04:31:38 event -- common/autotest_common.sh@10 -- # set +x 00:09:51.232 ************************************ 00:09:51.232 START TEST event_perf 00:09:51.232 ************************************ 00:09:51.232 04:31:38 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:51.232 Running I/O for 1 seconds...[2024-11-27 04:31:38.820356] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
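The `lt 1.15 2` check traced above (via `cmp_versions` in `scripts/common.sh`) splits each version string on `.`, `-`, and `:` and compares the resulting fields numerically, left to right. A condensed standalone sketch of that comparison — not the `scripts/common.sh` helper itself, and assuming purely numeric fields:

```shell
#!/usr/bin/env bash
# Field-wise "less than" version compare, as traced above:
# split both versions on . - : (via IFS), pad the shorter one with
# zeros, and return success on the first strictly smaller field.
version_lt() {
    local IFS='.-:'
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not less-than
}
```

With this sketch, `version_lt 1.15 2` succeeds (1 < 2 decides on the first field), which is why the trace above takes the `lt 1.15 2` branch for the installed lcov.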
00:09:51.232 [2024-11-27 04:31:38.820505] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58174 ] 00:09:51.491 [2024-11-27 04:31:38.994525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.749 [2024-11-27 04:31:39.150036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.749 [2024-11-27 04:31:39.150204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.749 [2024-11-27 04:31:39.150356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.749 Running I/O for 1 seconds...[2024-11-27 04:31:39.150666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.125 00:09:53.125 lcore 0: 185911 00:09:53.125 lcore 1: 185909 00:09:53.125 lcore 2: 185909 00:09:53.125 lcore 3: 185910 00:09:53.125 done. 
00:09:53.125 00:09:53.125 real 0m1.614s 00:09:53.125 user 0m4.353s 00:09:53.125 sys 0m0.135s 00:09:53.125 04:31:40 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.125 04:31:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:53.125 ************************************ 00:09:53.125 END TEST event_perf 00:09:53.125 ************************************ 00:09:53.125 04:31:40 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:53.125 04:31:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:53.125 04:31:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.125 04:31:40 event -- common/autotest_common.sh@10 -- # set +x 00:09:53.125 ************************************ 00:09:53.125 START TEST event_reactor 00:09:53.125 ************************************ 00:09:53.125 04:31:40 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:53.125 [2024-11-27 04:31:40.493600] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:09:53.125 [2024-11-27 04:31:40.493828] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58218 ] 00:09:53.125 [2024-11-27 04:31:40.680309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.382 [2024-11-27 04:31:40.816337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.757 test_start 00:09:54.757 oneshot 00:09:54.757 tick 100 00:09:54.757 tick 100 00:09:54.757 tick 250 00:09:54.757 tick 100 00:09:54.757 tick 100 00:09:54.757 tick 250 00:09:54.757 tick 100 00:09:54.757 tick 500 00:09:54.757 tick 100 00:09:54.757 tick 100 00:09:54.757 tick 250 00:09:54.757 tick 100 00:09:54.757 tick 100 00:09:54.757 test_end 00:09:54.757 00:09:54.757 real 0m1.619s 00:09:54.757 user 0m1.398s 00:09:54.757 sys 0m0.110s 00:09:54.757 04:31:42 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.757 04:31:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:54.757 ************************************ 00:09:54.757 END TEST event_reactor 00:09:54.757 ************************************ 00:09:54.757 04:31:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:54.757 04:31:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:54.757 04:31:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.757 04:31:42 event -- common/autotest_common.sh@10 -- # set +x 00:09:54.757 ************************************ 00:09:54.757 START TEST event_reactor_perf 00:09:54.757 ************************************ 00:09:54.757 04:31:42 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:54.757 [2024-11-27 
04:31:42.174763] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:09:54.757 [2024-11-27 04:31:42.175035] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58250 ] 00:09:54.757 [2024-11-27 04:31:42.365886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.016 [2024-11-27 04:31:42.499084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.390 test_start 00:09:56.390 test_end 00:09:56.390 Performance: 275163 events per second 00:09:56.390 00:09:56.390 real 0m1.622s 00:09:56.390 user 0m1.397s 00:09:56.390 sys 0m0.112s 00:09:56.390 04:31:43 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.390 04:31:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:56.390 ************************************ 00:09:56.390 END TEST event_reactor_perf 00:09:56.390 ************************************ 00:09:56.390 04:31:43 event -- event/event.sh@49 -- # uname -s 00:09:56.390 04:31:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:56.390 04:31:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:56.390 04:31:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.390 04:31:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.390 04:31:43 event -- common/autotest_common.sh@10 -- # set +x 00:09:56.390 ************************************ 00:09:56.390 START TEST event_scheduler 00:09:56.390 ************************************ 00:09:56.390 04:31:43 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:56.390 * Looking for test storage... 
00:09:56.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:56.390 04:31:43 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.390 04:31:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.390 04:31:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.390 04:31:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.390 04:31:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:56.390 04:31:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.390 04:31:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.390 --rc genhtml_branch_coverage=1 00:09:56.390 --rc genhtml_function_coverage=1 00:09:56.390 --rc genhtml_legend=1 00:09:56.390 --rc geninfo_all_blocks=1 00:09:56.390 --rc geninfo_unexecuted_blocks=1 00:09:56.390 00:09:56.391 ' 00:09:56.391 04:31:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.391 --rc genhtml_branch_coverage=1 00:09:56.391 --rc genhtml_function_coverage=1 00:09:56.391 --rc 
genhtml_legend=1 00:09:56.391 --rc geninfo_all_blocks=1 00:09:56.391 --rc geninfo_unexecuted_blocks=1 00:09:56.391 00:09:56.391 ' 00:09:56.391 04:31:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:56.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.391 --rc genhtml_branch_coverage=1 00:09:56.391 --rc genhtml_function_coverage=1 00:09:56.391 --rc genhtml_legend=1 00:09:56.391 --rc geninfo_all_blocks=1 00:09:56.391 --rc geninfo_unexecuted_blocks=1 00:09:56.391 00:09:56.391 ' 00:09:56.391 04:31:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.391 --rc genhtml_branch_coverage=1 00:09:56.391 --rc genhtml_function_coverage=1 00:09:56.391 --rc genhtml_legend=1 00:09:56.391 --rc geninfo_all_blocks=1 00:09:56.391 --rc geninfo_unexecuted_blocks=1 00:09:56.391 00:09:56.391 ' 00:09:56.391 04:31:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:56.391 04:31:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58326 00:09:56.391 04:31:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:56.391 04:31:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:56.391 04:31:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58326 00:09:56.391 04:31:44 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58326 ']' 00:09:56.391 04:31:44 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.391 04:31:44 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:56.391 04:31:44 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.391 04:31:44 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.391 04:31:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:56.649 [2024-11-27 04:31:44.109484] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:09:56.649 [2024-11-27 04:31:44.109691] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58326 ] 00:09:56.907 [2024-11-27 04:31:44.303858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.908 [2024-11-27 04:31:44.477136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.908 [2024-11-27 04:31:44.477232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.908 [2024-11-27 04:31:44.477386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.908 [2024-11-27 04:31:44.477392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.472 04:31:45 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.472 04:31:45 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:57.472 04:31:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:57.472 04:31:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.472 04:31:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:57.472 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:57.472 POWER: Cannot set governor of lcore 0 to userspace 00:09:57.472 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:57.472 POWER: Cannot set governor of lcore 0 to performance 00:09:57.472 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:57.472 POWER: Cannot set governor of lcore 0 to userspace 00:09:57.472 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:57.472 POWER: Cannot set governor of lcore 0 to userspace 00:09:57.472 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:57.472 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:57.472 POWER: Unable to set Power Management Environment for lcore 0 00:09:57.472 [2024-11-27 04:31:45.076359] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:09:57.472 [2024-11-27 04:31:45.076388] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:09:57.472 [2024-11-27 04:31:45.076404] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:57.472 [2024-11-27 04:31:45.076435] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:57.472 [2024-11-27 04:31:45.076449] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:57.472 [2024-11-27 04:31:45.076463] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:57.472 04:31:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.472 04:31:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:57.472 04:31:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.472 04:31:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 [2024-11-27 04:31:45.401813] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:09:58.040 04:31:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:58.040 04:31:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.040 04:31:45 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 ************************************ 00:09:58.040 START TEST scheduler_create_thread 00:09:58.040 ************************************ 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 2 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 3 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 4 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 5 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 6 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.040 7 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 8 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 9 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 10 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 04:31:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.976 04:31:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.976 04:31:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:58.976 04:31:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:58.976 04:31:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.976 04:31:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:00.351 04:31:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.351 00:10:00.351 real 0m2.138s 00:10:00.351 user 0m0.020s 00:10:00.351 sys 0m0.003s 00:10:00.351 04:31:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.351 04:31:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:00.351 ************************************ 00:10:00.351 END TEST scheduler_create_thread 00:10:00.351 ************************************ 00:10:00.351 04:31:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:00.351 04:31:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58326 00:10:00.351 04:31:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58326 ']' 00:10:00.351 04:31:47 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58326 00:10:00.351 04:31:47 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:00.351 04:31:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.351 04:31:47 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58326 00:10:00.351 04:31:47 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:00.351 04:31:47 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:00.351 04:31:47 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58326' 00:10:00.351 killing process with pid 58326 00:10:00.351 04:31:47 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58326 00:10:00.351 04:31:47 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58326 00:10:00.609 [2024-11-27 04:31:48.027676] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:01.544 00:10:01.544 real 0m5.324s 00:10:01.544 user 0m8.930s 00:10:01.544 sys 0m0.541s 00:10:01.544 04:31:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.544 04:31:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:01.544 ************************************ 00:10:01.544 END TEST event_scheduler 00:10:01.544 ************************************ 00:10:01.801 04:31:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:01.801 04:31:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:01.801 04:31:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:01.801 04:31:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.801 04:31:49 event -- common/autotest_common.sh@10 -- # set +x 00:10:01.801 ************************************ 00:10:01.801 START TEST app_repeat 00:10:01.801 ************************************ 00:10:01.801 04:31:49 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58432 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:01.801 
04:31:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:01.801 Process app_repeat pid: 58432 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58432' 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:01.801 spdk_app_start Round 0 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:01.801 04:31:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58432 /var/tmp/spdk-nbd.sock 00:10:01.801 04:31:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58432 ']' 00:10:01.801 04:31:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:01.801 04:31:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:01.801 04:31:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:01.801 04:31:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.801 04:31:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:01.801 [2024-11-27 04:31:49.240172] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:10:01.801 [2024-11-27 04:31:49.240343] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58432 ] 00:10:01.801 [2024-11-27 04:31:49.421920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:02.059 [2024-11-27 04:31:49.581433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.059 [2024-11-27 04:31:49.581438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.995 04:31:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.995 04:31:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:02.995 04:31:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:03.254 Malloc0 00:10:03.254 04:31:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:03.513 Malloc1 00:10:03.513 04:31:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:03.513 04:31:51 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:03.513 04:31:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:03.772 /dev/nbd0 00:10:03.772 04:31:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:03.772 04:31:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:03.772 1+0 records in 00:10:03.772 1+0 
records out 00:10:03.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318209 s, 12.9 MB/s 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:03.772 04:31:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:03.772 04:31:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:03.772 04:31:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:03.772 04:31:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:04.031 /dev/nbd1 00:10:04.031 04:31:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:04.031 04:31:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:04.031 1+0 records in 00:10:04.031 1+0 records out 00:10:04.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293326 s, 14.0 MB/s 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:04.031 04:31:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:04.031 04:31:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:04.031 04:31:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:04.031 04:31:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:04.031 04:31:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.031 04:31:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:04.598 04:31:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:04.598 { 00:10:04.598 "nbd_device": "/dev/nbd0", 00:10:04.598 "bdev_name": "Malloc0" 00:10:04.598 }, 00:10:04.598 { 00:10:04.598 "nbd_device": "/dev/nbd1", 00:10:04.598 "bdev_name": "Malloc1" 00:10:04.598 } 00:10:04.598 ]' 00:10:04.598 04:31:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:04.598 { 00:10:04.598 "nbd_device": "/dev/nbd0", 00:10:04.598 "bdev_name": "Malloc0" 00:10:04.598 }, 00:10:04.598 { 00:10:04.598 "nbd_device": "/dev/nbd1", 00:10:04.598 "bdev_name": "Malloc1" 00:10:04.598 } 00:10:04.598 ]' 00:10:04.598 04:31:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:10:04.598 04:31:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:04.598 /dev/nbd1' 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:04.598 /dev/nbd1' 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:04.598 256+0 records in 00:10:04.598 256+0 records out 00:10:04.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639494 s, 164 MB/s 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:04.598 256+0 records in 00:10:04.598 256+0 records out 00:10:04.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299331 s, 35.0 MB/s 00:10:04.598 04:31:52 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:04.598 256+0 records in 00:10:04.598 256+0 records out 00:10:04.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296512 s, 35.4 MB/s 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:04.598 04:31:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:04.857 04:31:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:04.857 04:31:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:04.857 04:31:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:04.857 04:31:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:04.857 04:31:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:04.857 04:31:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:04.857 04:31:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:04.857 04:31:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:04.857 04:31:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:04.857 04:31:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.116 04:31:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:05.682 04:31:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:05.682 04:31:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:05.961 04:31:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:07.376 [2024-11-27 04:31:54.644176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:07.376 [2024-11-27 04:31:54.771072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.376 [2024-11-27 04:31:54.771082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.376 
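The round traced above follows a fixed data-integrity pattern: fill a temp file with random blocks, `dd` it onto every attached nbd device, then `cmp` each device back against the file. The sketch below reproduces that flow under stated assumptions — plain `mktemp` files stand in for `/dev/nbd0` and `/dev/nbd1`, and the `oflag=direct`/`iflag=direct` flags the real test uses against kernel block devices are omitted; it is an illustration of the pattern, not SPDK's `nbd_dd_data_verify` itself.

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify flow seen in the trace.
# Stand-ins: ordinary temp files play the role of the nbd devices.
set -e
tmp_file=$(mktemp)                 # stands in for .../test/event/nbdrandtest
nbd_list=("$(mktemp)" "$(mktemp)") # stand-ins for /dev/nbd0 /dev/nbd1

# write phase: 256 x 4 KiB random blocks, copied onto every "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: cmp exits non-zero at the first differing byte
all_match=1
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" || all_match=0
done
echo "all_match=$all_match"
rm -f "$tmp_file"
```

If any device returned different bytes than were written, `cmp` fails and `all_match` drops to 0, which is exactly the failure mode this autotest round is designed to catch.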
[2024-11-27 04:31:54.960400] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:07.376 [2024-11-27 04:31:54.960506] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:09.276 04:31:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:09.276 04:31:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:09.276 spdk_app_start Round 1 00:10:09.276 04:31:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58432 /var/tmp/spdk-nbd.sock 00:10:09.276 04:31:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58432 ']' 00:10:09.276 04:31:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:09.276 04:31:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:09.276 04:31:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:09.276 04:31:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.276 04:31:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:09.276 04:31:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.276 04:31:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:09.276 04:31:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:09.842 Malloc0 00:10:09.842 04:31:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:10.100 Malloc1 00:10:10.101 04:31:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:10.101 04:31:57 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:10.101 04:31:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:10.359 /dev/nbd0 00:10:10.359 04:31:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:10.359 04:31:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:10.359 04:31:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:10.359 04:31:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:10.359 04:31:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:10.359 04:31:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:10.359 04:31:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:10.359 04:31:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:10.359 04:31:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:10.359 04:31:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:10.359 04:31:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:10.359 1+0 records in 00:10:10.360 1+0 records out 00:10:10.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278614 s, 14.7 MB/s 00:10:10.360 04:31:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:10.360 04:31:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:10.360 04:31:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:10.360 
04:31:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:10.360 04:31:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:10.360 04:31:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:10.360 04:31:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:10.360 04:31:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:10.617 /dev/nbd1 00:10:10.617 04:31:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:10.617 04:31:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:10.617 04:31:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:10.618 1+0 records in 00:10:10.618 1+0 records out 00:10:10.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340776 s, 12.0 MB/s 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:10.618 04:31:58 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:10.618 04:31:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:10.618 04:31:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:10.618 04:31:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:10.618 04:31:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:10.618 04:31:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.618 04:31:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:10.876 04:31:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:10.876 { 00:10:10.876 "nbd_device": "/dev/nbd0", 00:10:10.876 "bdev_name": "Malloc0" 00:10:10.876 }, 00:10:10.876 { 00:10:10.876 "nbd_device": "/dev/nbd1", 00:10:10.876 "bdev_name": "Malloc1" 00:10:10.876 } 00:10:10.876 ]' 00:10:10.876 04:31:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:10.876 04:31:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:10.876 { 00:10:10.876 "nbd_device": "/dev/nbd0", 00:10:10.876 "bdev_name": "Malloc0" 00:10:10.876 }, 00:10:10.876 { 00:10:10.876 "nbd_device": "/dev/nbd1", 00:10:10.876 "bdev_name": "Malloc1" 00:10:10.876 } 00:10:10.876 ]' 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:11.133 /dev/nbd1' 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:11.133 /dev/nbd1' 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:11.133 
04:31:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:11.133 256+0 records in 00:10:11.133 256+0 records out 00:10:11.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450871 s, 233 MB/s 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:11.133 256+0 records in 00:10:11.133 256+0 records out 00:10:11.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294903 s, 35.6 MB/s 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:11.133 256+0 records in 00:10:11.133 256+0 records out 00:10:11.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286589 s, 36.6 MB/s 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.133 04:31:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:11.391 04:31:58 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:11.391 04:31:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:11.391 04:31:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:11.391 04:31:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.391 04:31:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.391 04:31:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:11.391 04:31:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:11.391 04:31:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.391 04:31:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.391 04:31:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:11.649 04:31:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:11.907 04:31:59 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:11.907 04:31:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:11.907 04:31:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:12.473 04:31:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:13.514 [2024-11-27 04:32:01.039454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.773 [2024-11-27 04:32:01.164260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.773 [2024-11-27 04:32:01.164270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.773 [2024-11-27 04:32:01.353208] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:13.773 [2024-11-27 04:32:01.353281] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
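Each round above ends with the `nbd_get_count` check: list the attached disks over the RPC socket, extract the device paths with `jq`, and count how many match `/dev/nbd`. The sketch below shows that pipeline with the JSON inlined in place of the real `rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks` call, so it runs without a live SPDK target; the variable names mirror the ones in the trace but the script itself is illustrative.

```shell
#!/usr/bin/env bash
# Sketch of the nbd_get_count logic: RPC output -> jq -> grep -c.
# Inlined JSON stands in for the nbd_get_disks RPC response.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Pull out just the device paths, one per line.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# grep -c prints the match count but exits non-zero on zero matches,
# so guard it for scripts running under `set -e`.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```

After `nbd_stop_disk` has detached both devices, the same pipeline runs against `'[]'`, `jq` emits nothing, and the count comes back 0 — which is the `'[' 0 -ne 0 ']'` check visible in the trace before the round is torn down.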
00:10:15.675 04:32:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:10:15.675 spdk_app_start Round 2
00:10:15.675 04:32:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:10:15.675 04:32:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58432 /var/tmp/spdk-nbd.sock
00:10:15.675 04:32:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58432 ']'
00:10:15.675 04:32:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:15.675 04:32:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:15.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:15.675 04:32:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:15.675 04:32:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:15.675 04:32:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:15.675 04:32:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:15.675 04:32:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:15.675 04:32:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:16.241 Malloc0
00:10:16.241 04:32:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:16.499 Malloc1
00:10:16.499 04:32:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:16.499 04:32:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:16.757 /dev/nbd0
00:10:16.757 04:32:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:16.757 04:32:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:16.757 1+0 records in
00:10:16.757 1+0 records out
00:10:16.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223294 s, 18.3 MB/s
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:16.757 04:32:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:16.757 04:32:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:16.757 04:32:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:16.757 04:32:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:10:17.015 /dev/nbd1
00:10:17.015 04:32:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:17.015 04:32:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:17.015 04:32:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:17.015 04:32:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:17.015 04:32:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:17.015 04:32:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:17.015 04:32:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:17.015 04:32:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:17.015 04:32:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:17.015 04:32:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:17.273 04:32:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:17.273 1+0 records in
00:10:17.273 1+0 records out
00:10:17.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371367 s, 11.0 MB/s
00:10:17.273 04:32:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:17.273 04:32:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:17.273 04:32:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:17.274 04:32:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:17.274 04:32:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:17.274 04:32:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:17.274 04:32:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:17.274 04:32:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:17.274 04:32:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:17.274 04:32:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:17.532 04:32:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:17.532 {
00:10:17.532 "nbd_device": "/dev/nbd0",
00:10:17.532 "bdev_name": "Malloc0"
00:10:17.532 },
00:10:17.532 {
00:10:17.532 "nbd_device": "/dev/nbd1",
00:10:17.532 "bdev_name": "Malloc1"
00:10:17.532 }
00:10:17.532 ]'
00:10:17.532 04:32:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:10:17.532 {
00:10:17.532 "nbd_device": "/dev/nbd0",
00:10:17.532 "bdev_name": "Malloc0"
00:10:17.532 },
00:10:17.532 {
00:10:17.532 "nbd_device": "/dev/nbd1",
00:10:17.532 "bdev_name": "Malloc1"
00:10:17.532 }
00:10:17.532 ]'
00:10:17.532 04:32:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:17.532 /dev/nbd1'
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:17.532 /dev/nbd1'
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:10:17.532 256+0 records in
00:10:17.532 256+0 records out
00:10:17.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00584822 s, 179 MB/s
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:17.532 256+0 records in
00:10:17.532 256+0 records out
00:10:17.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297513 s, 35.2 MB/s
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:17.532 256+0 records in
00:10:17.532 256+0 records out
00:10:17.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.035382 s, 29.6 MB/s
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:17.532 04:32:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:17.790 04:32:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:17.790 04:32:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:17.790 04:32:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:17.790 04:32:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:17.790 04:32:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:17.790 04:32:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:17.790 04:32:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:17.790 04:32:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:17.790 04:32:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:17.790 04:32:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:18.355 04:32:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:18.613 04:32:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:18.613 04:32:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:18.613 04:32:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:18.613 04:32:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:18.613 04:32:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:18.613 04:32:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:10:18.613 04:32:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:10:18.613 04:32:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:10:18.613 04:32:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:10:18.613 04:32:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:10:18.613 04:32:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:18.613 04:32:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:10:18.613 04:32:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:10:19.180 04:32:06 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:10:20.113 [2024-11-27 04:32:07.576420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:20.113 [2024-11-27 04:32:07.701828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:20.113 [2024-11-27 04:32:07.701833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:20.422 [2024-11-27 04:32:07.891934] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:10:20.422 [2024-11-27 04:32:07.892024] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:22.325 04:32:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58432 /var/tmp/spdk-nbd.sock
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58432 ']'
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:22.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:22.325 04:32:09 event.app_repeat -- event/event.sh@39 -- # killprocess 58432
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58432 ']'
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58432
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58432
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 58432
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:22.325 04:32:09 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58432'
00:10:22.326 04:32:09 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58432
00:10:22.326 04:32:09 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58432
00:10:23.259 spdk_app_start is called in Round 0.
00:10:23.259 Shutdown signal received, stop current app iteration
00:10:23.259 Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 reinitialization...
00:10:23.259 spdk_app_start is called in Round 1.
00:10:23.259 Shutdown signal received, stop current app iteration
00:10:23.259 Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 reinitialization...
00:10:23.259 spdk_app_start is called in Round 2.
00:10:23.259 Shutdown signal received, stop current app iteration
00:10:23.259 Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 reinitialization...
00:10:23.259 spdk_app_start is called in Round 3.
00:10:23.259 Shutdown signal received, stop current app iteration
04:32:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
04:32:10 event.app_repeat -- event/event.sh@42 -- # return 0
00:10:23.259
00:10:23.259 real 0m21.600s
00:10:23.259 user 0m47.989s
00:10:23.259 sys 0m2.912s
00:10:23.259 04:32:10 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:23.259 04:32:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:23.259 ************************************
00:10:23.259 END TEST app_repeat
00:10:23.259 ************************************
00:10:23.259 04:32:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:10:23.259 04:32:10 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:10:23.259 04:32:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:23.259 04:32:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:23.259 04:32:10 event -- common/autotest_common.sh@10 -- # set +x
00:10:23.259 ************************************
00:10:23.259 START TEST cpu_locks
00:10:23.259 ************************************
00:10:23.259 04:32:10 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:10:23.518 * Looking for test storage...
00:10:23.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:10:23.519 04:32:10 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:23.519 04:32:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:10:23.519 04:32:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:23.519 04:32:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:23.519 04:32:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:10:23.519 04:32:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:10:23.519 04:32:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:10:23.519 04:32:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:10:23.519 04:32:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:23.519 04:32:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:10:23.519 04:32:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:10:23.519 04:32:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:23.519 04:32:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:23.519 04:32:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:10:23.519 04:32:11 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:23.519 04:32:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:23.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:23.519 --rc genhtml_branch_coverage=1
00:10:23.519 --rc genhtml_function_coverage=1
00:10:23.519 --rc genhtml_legend=1
00:10:23.519 --rc geninfo_all_blocks=1
00:10:23.519 --rc geninfo_unexecuted_blocks=1
00:10:23.519
00:10:23.519 '
00:10:23.519 04:32:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:23.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:23.519 --rc genhtml_branch_coverage=1
00:10:23.519 --rc genhtml_function_coverage=1
00:10:23.519 --rc genhtml_legend=1
00:10:23.519 --rc geninfo_all_blocks=1
00:10:23.519 --rc geninfo_unexecuted_blocks=1
00:10:23.519
00:10:23.519 '
00:10:23.519 04:32:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:23.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:23.519 --rc genhtml_branch_coverage=1
00:10:23.519 --rc genhtml_function_coverage=1
00:10:23.519 --rc genhtml_legend=1
00:10:23.519 --rc geninfo_all_blocks=1
00:10:23.519 --rc geninfo_unexecuted_blocks=1
00:10:23.519
00:10:23.519 '
00:10:23.519 04:32:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:23.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:23.519 --rc genhtml_branch_coverage=1
00:10:23.519 --rc genhtml_function_coverage=1
00:10:23.519 --rc genhtml_legend=1
00:10:23.519 --rc geninfo_all_blocks=1
00:10:23.519 --rc geninfo_unexecuted_blocks=1
00:10:23.519
00:10:23.519 '
00:10:23.519 04:32:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:10:23.519 04:32:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:10:23.519 04:32:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:10:23.519 04:32:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:10:23.519 04:32:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:23.519 04:32:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:23.519 04:32:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:23.519 ************************************
00:10:23.519 START TEST default_locks
00:10:23.519 ************************************
00:10:23.519 04:32:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:10:23.519 04:32:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58910
00:10:23.519 04:32:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:10:23.519 04:32:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58910
00:10:23.519 04:32:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58910 ']'
00:10:23.519 04:32:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:23.519 04:32:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:23.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:23.519 04:32:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:23.519 04:32:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:23.519 04:32:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:23.777 [2024-11-27 04:32:11.166303] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:10:23.777 [2024-11-27 04:32:11.166465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58910 ]
00:10:23.777 [2024-11-27 04:32:11.348591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:24.051 [2024-11-27 04:32:11.508006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:24.989 04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:24.989 04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:10:24.989 04:32:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58910
00:10:24.989 04:32:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58910
00:10:25.247 04:32:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:25.247 04:32:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58910
00:10:25.247 04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58910 ']'
00:10:25.247 04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58910
00:10:25.247 04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:10:25.247 04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:25.247 04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58910
00:10:25.247 04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:25.247 04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58910
04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58910'
04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58910
04:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58910
00:10:27.777 04:32:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58910
00:10:27.777 04:32:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:10:27.777 04:32:14 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58910
00:10:27.777 04:32:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:27.777 04:32:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:27.777 04:32:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58910
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58910 ']'
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:27.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:27.777 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58910) - No such process
00:10:27.777 ERROR: process (pid: 58910) is no longer running
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:10:27.777
00:10:27.777 real 0m3.987s
00:10:27.777 user 0m3.967s
00:10:27.777 sys 0m0.713s
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:27.777 04:32:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:27.777 ************************************
00:10:27.777 END TEST default_locks
00:10:27.777 ************************************
00:10:27.777 04:32:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:10:27.777 04:32:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:27.777 04:32:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:27.777 04:32:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:27.777 ************************************
00:10:27.777 START TEST default_locks_via_rpc
00:10:27.777 ************************************
00:10:27.777 04:32:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:10:27.777 04:32:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58987
00:10:27.777 04:32:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58987
00:10:27.777 04:32:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58987 ']'
00:10:27.777 04:32:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:27.777 04:32:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:27.777 04:32:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:10:27.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:27.777 04:32:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:27.778 04:32:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:27.778 04:32:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:28.036 [2024-11-27 04:32:15.178423] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:10:27.778 [2024-11-27 04:32:15.178607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58987 ] 00:10:27.778 [2024-11-27 04:32:15.362592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.036 [2024-11-27 04:32:15.489835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.972 04:32:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58987 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58987 00:10:28.972 04:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58987 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58987 ']' 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58987 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58987 00:10:29.539 killing process with pid 58987 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58987' 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58987 00:10:29.539 04:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58987 00:10:32.070 00:10:32.070 real 0m4.078s 00:10:32.070 user 0m4.067s 00:10:32.070 sys 0m0.717s 00:10:32.070 04:32:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.070 04:32:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 ************************************ 00:10:32.070 END TEST default_locks_via_rpc 00:10:32.070 ************************************ 00:10:32.070 04:32:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:32.070 04:32:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.070 04:32:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.070 04:32:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 ************************************ 00:10:32.070 START TEST non_locking_app_on_locked_coremask 00:10:32.070 ************************************ 00:10:32.070 04:32:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:32.070 04:32:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59061 00:10:32.070 04:32:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59061 /var/tmp/spdk.sock 00:10:32.070 04:32:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59061 ']' 00:10:32.070 04:32:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:32.070 04:32:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.070 04:32:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:32.070 04:32:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.070 04:32:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.070 04:32:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 [2024-11-27 04:32:19.307158] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:10:32.070 [2024-11-27 04:32:19.307341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59061 ] 00:10:32.070 [2024-11-27 04:32:19.496096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.070 [2024-11-27 04:32:19.668062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.004 04:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.004 04:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:33.004 04:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59077 00:10:33.004 04:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:33.004 04:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59077 /var/tmp/spdk2.sock 00:10:33.004 04:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59077 ']' 00:10:33.004 04:32:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:33.004 04:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.004 04:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:33.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:33.004 04:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.004 04:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:33.261 [2024-11-27 04:32:20.667379] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:10:33.261 [2024-11-27 04:32:20.667560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59077 ] 00:10:33.261 [2024-11-27 04:32:20.867085] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:33.261 [2024-11-27 04:32:20.867174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.519 [2024-11-27 04:32:21.129023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.085 04:32:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.085 04:32:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:36.085 04:32:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59061 00:10:36.085 04:32:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59061 00:10:36.085 04:32:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59061 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59061 ']' 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59061 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59061 00:10:37.018 killing process with pid 59061 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59061' 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59061 00:10:37.018 04:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59061 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59077 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59077 ']' 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59077 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59077 00:10:41.205 killing process with pid 59077 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59077' 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59077 00:10:41.205 04:32:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59077 00:10:43.734 00:10:43.734 real 0m11.829s 00:10:43.734 user 0m12.457s 00:10:43.734 sys 0m1.494s 00:10:43.734 04:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:43.734 ************************************ 00:10:43.734 END TEST non_locking_app_on_locked_coremask 00:10:43.734 ************************************ 00:10:43.734 04:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:43.734 04:32:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:43.734 04:32:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.734 04:32:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.734 04:32:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:43.734 ************************************ 00:10:43.734 START TEST locking_app_on_unlocked_coremask 00:10:43.734 ************************************ 00:10:43.734 04:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:43.734 04:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59225 00:10:43.734 04:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59225 /var/tmp/spdk.sock 00:10:43.734 04:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:43.734 04:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59225 ']' 00:10:43.734 04:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.734 04:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:43.734 04:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.734 04:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.734 04:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:43.734 [2024-11-27 04:32:31.168319] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:10:43.734 [2024-11-27 04:32:31.168480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:10:43.734 [2024-11-27 04:32:31.339151] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:43.734 [2024-11-27 04:32:31.339208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.992 [2024-11-27 04:32:31.472841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59247 00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59247 /var/tmp/spdk2.sock 00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59247 ']' 
00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:44.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.927 04:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:44.927 [2024-11-27 04:32:32.506561] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:10:44.927 [2024-11-27 04:32:32.506748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59247 ] 00:10:45.185 [2024-11-27 04:32:32.710213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.443 [2024-11-27 04:32:32.979923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.968 04:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.968 04:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:47.968 04:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59247 00:10:47.968 04:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:47.968 04:32:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59247 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59225 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59225 ']' 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59225 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59225 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.533 killing process with pid 59225 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59225' 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59225 00:10:48.533 04:32:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59225 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59247 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59247 ']' 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59247 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59247 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.796 killing process with pid 59247 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59247' 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59247 00:10:53.796 04:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59247 00:10:55.699 00:10:55.699 real 0m11.739s 00:10:55.699 user 0m12.360s 00:10:55.699 sys 0m1.452s 00:10:55.699 04:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.699 04:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:55.700 ************************************ 00:10:55.700 END TEST locking_app_on_unlocked_coremask 00:10:55.700 ************************************ 00:10:55.700 04:32:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:55.700 04:32:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:55.700 04:32:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.700 04:32:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:55.700 ************************************ 00:10:55.700 START TEST 
locking_app_on_locked_coremask 00:10:55.700 ************************************ 00:10:55.700 04:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:55.700 04:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:55.700 04:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59395 00:10:55.700 04:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59395 /var/tmp/spdk.sock 00:10:55.700 04:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59395 ']' 00:10:55.700 04:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.700 04:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.700 04:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.700 04:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.700 04:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:55.700 [2024-11-27 04:32:42.964001] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:10:55.700 [2024-11-27 04:32:42.964169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59395 ] 00:10:55.700 [2024-11-27 04:32:43.132690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.700 [2024-11-27 04:32:43.262928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.632 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.632 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:56.632 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59414 00:10:56.632 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59414 /var/tmp/spdk2.sock 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59414 /var/tmp/spdk2.sock 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59414 /var/tmp/spdk2.sock 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59414 ']' 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:56.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.633 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:56.633 [2024-11-27 04:32:44.238753] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:10:56.633 [2024-11-27 04:32:44.238920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59414 ] 00:10:56.892 [2024-11-27 04:32:44.432836] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59395 has claimed it. 00:10:56.892 [2024-11-27 04:32:44.432923] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:10:57.459 ERROR: process (pid: 59414) is no longer running 00:10:57.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59414) - No such process 00:10:57.459 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.459 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:57.459 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:57.459 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.459 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:57.459 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.459 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59395 00:10:57.459 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59395 00:10:57.459 04:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:57.717 04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59395 00:10:57.717 04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59395 ']' 00:10:57.717 04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59395 00:10:57.717 04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:57.717 04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.717 04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59395 00:10:57.717 
04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.717 04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.717 killing process with pid 59395 00:10:57.717 04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59395' 00:10:57.717 04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59395 00:10:57.717 04:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59395 00:11:00.247 00:11:00.247 real 0m4.674s 00:11:00.247 user 0m4.985s 00:11:00.247 sys 0m0.825s 00:11:00.247 04:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.247 ************************************ 00:11:00.247 END TEST locking_app_on_locked_coremask 00:11:00.247 ************************************ 00:11:00.247 04:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:00.247 04:32:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:00.247 04:32:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:00.247 04:32:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.247 04:32:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:00.247 ************************************ 00:11:00.247 START TEST locking_overlapped_coremask 00:11:00.247 ************************************ 00:11:00.247 04:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:00.247 04:32:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59483 00:11:00.247 04:32:47 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59483 /var/tmp/spdk.sock 00:11:00.247 04:32:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:00.247 04:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59483 ']' 00:11:00.247 04:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.247 04:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.247 04:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.247 04:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.247 04:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:00.247 [2024-11-27 04:32:47.696913] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:11:00.247 [2024-11-27 04:32:47.697110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59483 ] 00:11:00.506 [2024-11-27 04:32:47.890294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:00.506 [2024-11-27 04:32:48.086524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.506 [2024-11-27 04:32:48.086681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.506 [2024-11-27 04:32:48.086689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59507 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59507 /var/tmp/spdk2.sock 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59507 /var/tmp/spdk2.sock 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59507 /var/tmp/spdk2.sock 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59507 ']' 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.442 04:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:01.711 [2024-11-27 04:32:49.080152] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:11:01.711 [2024-11-27 04:32:49.080315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59507 ] 00:11:01.711 [2024-11-27 04:32:49.275892] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59483 has claimed it. 00:11:01.711 [2024-11-27 04:32:49.275989] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:11:02.276 ERROR: process (pid: 59507) is no longer running 00:11:02.276 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59507) - No such process 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59483 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59483 ']' 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59483 00:11:02.276 04:32:49 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59483 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.276 killing process with pid 59483 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59483' 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59483 00:11:02.276 04:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59483 00:11:04.803 00:11:04.803 real 0m4.521s 00:11:04.803 user 0m12.277s 00:11:04.803 sys 0m0.685s 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:04.803 ************************************ 00:11:04.803 END TEST locking_overlapped_coremask 00:11:04.803 ************************************ 00:11:04.803 04:32:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:04.803 04:32:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:04.803 04:32:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.803 04:32:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:04.803 ************************************ 00:11:04.803 START TEST 
locking_overlapped_coremask_via_rpc 00:11:04.803 ************************************ 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59571 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59571 /var/tmp/spdk.sock 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59571 ']' 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.803 04:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.803 [2024-11-27 04:32:52.252257] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:11:04.803 [2024-11-27 04:32:52.252419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59571 ] 00:11:05.061 [2024-11-27 04:32:52.428137] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:05.061 [2024-11-27 04:32:52.428213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.061 [2024-11-27 04:32:52.564490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.061 [2024-11-27 04:32:52.564583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.061 [2024-11-27 04:32:52.564589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.995 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.995 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:05.995 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59589 00:11:05.995 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:05.996 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59589 /var/tmp/spdk2.sock 00:11:05.996 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59589 ']' 00:11:05.996 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:05.996 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.996 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:05.996 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:05.996 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.996 04:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.996 [2024-11-27 04:32:53.570585] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:11:05.996 [2024-11-27 04:32:53.570737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59589 ] 00:11:06.329 [2024-11-27 04:32:53.767957] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:06.329 [2024-11-27 04:32:53.768029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:06.588 [2024-11-27 04:32:54.040711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.588 [2024-11-27 04:32:54.043923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.588 [2024-11-27 04:32:54.043943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:09.111 04:32:56 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.111 [2024-11-27 04:32:56.328984] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59571 has claimed it. 00:11:09.111 request: 00:11:09.111 { 00:11:09.111 "method": "framework_enable_cpumask_locks", 00:11:09.111 "req_id": 1 00:11:09.111 } 00:11:09.111 Got JSON-RPC error response 00:11:09.111 response: 00:11:09.111 { 00:11:09.111 "code": -32603, 00:11:09.111 "message": "Failed to claim CPU core: 2" 00:11:09.111 } 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59571 /var/tmp/spdk.sock 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59571 ']' 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59589 /var/tmp/spdk2.sock 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59589 ']' 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.111 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.368 ************************************ 00:11:09.368 END TEST locking_overlapped_coremask_via_rpc 00:11:09.368 ************************************ 00:11:09.368 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.368 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:09.368 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:09.368 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:09.368 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:09.368 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:09.369 00:11:09.369 real 0m4.787s 00:11:09.369 user 0m1.781s 00:11:09.369 sys 0m0.212s 00:11:09.369 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.369 04:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.369 04:32:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:09.369 04:32:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59571 ]] 00:11:09.369 04:32:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59571 00:11:09.369 04:32:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59571 ']' 00:11:09.369 04:32:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59571 00:11:09.369 04:32:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:09.369 04:32:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.369 04:32:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59571 00:11:09.626 killing process with pid 59571 00:11:09.626 04:32:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.626 04:32:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.626 04:32:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59571' 00:11:09.626 04:32:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59571 00:11:09.626 04:32:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59571 00:11:12.167 04:32:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59589 ]] 00:11:12.167 04:32:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59589 00:11:12.167 04:32:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59589 ']' 00:11:12.167 04:32:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59589 00:11:12.167 04:32:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:12.167 04:32:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.167 04:32:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59589 00:11:12.167 killing process with pid 59589 00:11:12.167 04:32:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:12.167 04:32:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:12.167 04:32:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59589' 00:11:12.167 04:32:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59589 00:11:12.167 04:32:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59589 00:11:14.068 04:33:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:14.068 Process with pid 59571 is not found 00:11:14.068 Process with pid 59589 is not found 00:11:14.068 04:33:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:14.068 04:33:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59571 ]] 00:11:14.069 04:33:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59571 00:11:14.069 04:33:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59571 ']' 00:11:14.069 04:33:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59571 00:11:14.069 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59571) - No such process 00:11:14.069 04:33:01 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59571 is not found' 00:11:14.069 04:33:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59589 ]] 00:11:14.069 04:33:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59589 00:11:14.069 04:33:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59589 ']' 00:11:14.069 04:33:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59589 00:11:14.069 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59589) - No such process 00:11:14.069 04:33:01 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59589 is not found' 00:11:14.069 04:33:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:14.069 00:11:14.069 real 0m50.675s 00:11:14.069 user 1m27.846s 00:11:14.069 sys 0m7.277s 00:11:14.069 04:33:01 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.069 04:33:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:14.069 
************************************ 00:11:14.069 END TEST cpu_locks 00:11:14.069 ************************************ 00:11:14.069 00:11:14.069 real 1m22.981s 00:11:14.069 user 2m32.145s 00:11:14.069 sys 0m11.356s 00:11:14.069 ************************************ 00:11:14.069 END TEST event 00:11:14.069 ************************************ 00:11:14.069 04:33:01 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.069 04:33:01 event -- common/autotest_common.sh@10 -- # set +x 00:11:14.069 04:33:01 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:14.069 04:33:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.069 04:33:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.069 04:33:01 -- common/autotest_common.sh@10 -- # set +x 00:11:14.069 ************************************ 00:11:14.069 START TEST thread 00:11:14.069 ************************************ 00:11:14.069 04:33:01 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:14.069 * Looking for test storage... 
00:11:14.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:14.069 04:33:01 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.069 04:33:01 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.069 04:33:01 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.327 04:33:01 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.327 04:33:01 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.327 04:33:01 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.327 04:33:01 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.327 04:33:01 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.327 04:33:01 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.327 04:33:01 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.327 04:33:01 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.327 04:33:01 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.327 04:33:01 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.327 04:33:01 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.327 04:33:01 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.327 04:33:01 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:14.327 04:33:01 thread -- scripts/common.sh@345 -- # : 1 00:11:14.327 04:33:01 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.327 04:33:01 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.327 04:33:01 thread -- scripts/common.sh@365 -- # decimal 1 00:11:14.327 04:33:01 thread -- scripts/common.sh@353 -- # local d=1 00:11:14.328 04:33:01 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.328 04:33:01 thread -- scripts/common.sh@355 -- # echo 1 00:11:14.328 04:33:01 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.328 04:33:01 thread -- scripts/common.sh@366 -- # decimal 2 00:11:14.328 04:33:01 thread -- scripts/common.sh@353 -- # local d=2 00:11:14.328 04:33:01 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.328 04:33:01 thread -- scripts/common.sh@355 -- # echo 2 00:11:14.328 04:33:01 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.328 04:33:01 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.328 04:33:01 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.328 04:33:01 thread -- scripts/common.sh@368 -- # return 0 00:11:14.328 04:33:01 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.328 04:33:01 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.328 --rc genhtml_branch_coverage=1 00:11:14.328 --rc genhtml_function_coverage=1 00:11:14.328 --rc genhtml_legend=1 00:11:14.328 --rc geninfo_all_blocks=1 00:11:14.328 --rc geninfo_unexecuted_blocks=1 00:11:14.328 00:11:14.328 ' 00:11:14.328 04:33:01 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.328 --rc genhtml_branch_coverage=1 00:11:14.328 --rc genhtml_function_coverage=1 00:11:14.328 --rc genhtml_legend=1 00:11:14.328 --rc geninfo_all_blocks=1 00:11:14.328 --rc geninfo_unexecuted_blocks=1 00:11:14.328 00:11:14.328 ' 00:11:14.328 04:33:01 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.328 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.328 --rc genhtml_branch_coverage=1 00:11:14.328 --rc genhtml_function_coverage=1 00:11:14.328 --rc genhtml_legend=1 00:11:14.328 --rc geninfo_all_blocks=1 00:11:14.328 --rc geninfo_unexecuted_blocks=1 00:11:14.328 00:11:14.328 ' 00:11:14.328 04:33:01 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.328 --rc genhtml_branch_coverage=1 00:11:14.328 --rc genhtml_function_coverage=1 00:11:14.328 --rc genhtml_legend=1 00:11:14.328 --rc geninfo_all_blocks=1 00:11:14.328 --rc geninfo_unexecuted_blocks=1 00:11:14.328 00:11:14.328 ' 00:11:14.328 04:33:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:14.328 04:33:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:14.328 04:33:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.328 04:33:01 thread -- common/autotest_common.sh@10 -- # set +x 00:11:14.328 ************************************ 00:11:14.328 START TEST thread_poller_perf 00:11:14.328 ************************************ 00:11:14.328 04:33:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:14.328 [2024-11-27 04:33:01.819908] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:11:14.328 [2024-11-27 04:33:01.820067] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59784 ] 00:11:14.586 [2024-11-27 04:33:02.008348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.586 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:14.586 [2024-11-27 04:33:02.163397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.962 [2024-11-27T04:33:03.585Z] ====================================== 00:11:15.962 [2024-11-27T04:33:03.585Z] busy:2212546396 (cyc) 00:11:15.962 [2024-11-27T04:33:03.585Z] total_run_count: 294000 00:11:15.962 [2024-11-27T04:33:03.585Z] tsc_hz: 2200000000 (cyc) 00:11:15.962 [2024-11-27T04:33:03.585Z] ====================================== 00:11:15.962 [2024-11-27T04:33:03.585Z] poller_cost: 7525 (cyc), 3420 (nsec) 00:11:15.962 00:11:15.962 real 0m1.628s 00:11:15.962 user 0m1.426s 00:11:15.962 sys 0m0.094s 00:11:15.962 04:33:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.962 04:33:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:15.962 ************************************ 00:11:15.962 END TEST thread_poller_perf 00:11:15.962 ************************************ 00:11:15.962 04:33:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:15.962 04:33:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:15.962 04:33:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.962 04:33:03 thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.962 ************************************ 00:11:15.962 START TEST thread_poller_perf 00:11:15.962 
************************************ 00:11:15.962 04:33:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:15.962 [2024-11-27 04:33:03.506877] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:11:15.962 [2024-11-27 04:33:03.507067] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59826 ] 00:11:16.220 [2024-11-27 04:33:03.700086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.478 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:16.478 [2024-11-27 04:33:03.852227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.853 [2024-11-27T04:33:05.476Z] ====================================== 00:11:17.853 [2024-11-27T04:33:05.476Z] busy:2203930646 (cyc) 00:11:17.853 [2024-11-27T04:33:05.476Z] total_run_count: 3754000 00:11:17.853 [2024-11-27T04:33:05.476Z] tsc_hz: 2200000000 (cyc) 00:11:17.853 [2024-11-27T04:33:05.476Z] ====================================== 00:11:17.853 [2024-11-27T04:33:05.476Z] poller_cost: 587 (cyc), 266 (nsec) 00:11:17.853 00:11:17.853 real 0m1.632s 00:11:17.853 user 0m1.411s 00:11:17.853 sys 0m0.112s 00:11:17.853 04:33:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.853 04:33:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:17.853 ************************************ 00:11:17.853 END TEST thread_poller_perf 00:11:17.853 ************************************ 00:11:17.853 04:33:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:17.853 00:11:17.853 real 0m3.530s 00:11:17.853 user 0m2.986s 00:11:17.853 sys 0m0.331s 00:11:17.853 04:33:05 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.853 04:33:05 thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.853 ************************************ 00:11:17.853 END TEST thread 00:11:17.853 ************************************ 00:11:17.853 04:33:05 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:17.853 04:33:05 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:17.853 04:33:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:17.853 04:33:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.853 04:33:05 -- common/autotest_common.sh@10 -- # set +x 00:11:17.853 ************************************ 00:11:17.853 START TEST app_cmdline 00:11:17.853 ************************************ 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:17.853 * Looking for test storage... 00:11:17.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.853 04:33:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:17.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.853 --rc genhtml_branch_coverage=1 00:11:17.853 --rc genhtml_function_coverage=1 00:11:17.853 --rc 
genhtml_legend=1 00:11:17.853 --rc geninfo_all_blocks=1 00:11:17.853 --rc geninfo_unexecuted_blocks=1 00:11:17.853 00:11:17.853 ' 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:17.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.853 --rc genhtml_branch_coverage=1 00:11:17.853 --rc genhtml_function_coverage=1 00:11:17.853 --rc genhtml_legend=1 00:11:17.853 --rc geninfo_all_blocks=1 00:11:17.853 --rc geninfo_unexecuted_blocks=1 00:11:17.853 00:11:17.853 ' 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:17.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.853 --rc genhtml_branch_coverage=1 00:11:17.853 --rc genhtml_function_coverage=1 00:11:17.853 --rc genhtml_legend=1 00:11:17.853 --rc geninfo_all_blocks=1 00:11:17.853 --rc geninfo_unexecuted_blocks=1 00:11:17.853 00:11:17.853 ' 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:17.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.853 --rc genhtml_branch_coverage=1 00:11:17.853 --rc genhtml_function_coverage=1 00:11:17.853 --rc genhtml_legend=1 00:11:17.853 --rc geninfo_all_blocks=1 00:11:17.853 --rc geninfo_unexecuted_blocks=1 00:11:17.853 00:11:17.853 ' 00:11:17.853 04:33:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:17.853 04:33:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59915 00:11:17.853 04:33:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59915 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59915 ']' 00:11:17.853 04:33:05 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:11:17.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.853 04:33:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:18.111 [2024-11-27 04:33:05.507137] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:11:18.111 [2024-11-27 04:33:05.507328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59915 ] 00:11:18.111 [2024-11-27 04:33:05.695423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.369 [2024-11-27 04:33:05.827395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.304 04:33:06 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.304 04:33:06 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:19.304 04:33:06 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:19.562 { 00:11:19.562 "version": "SPDK v25.01-pre git sha1 a640d9f98", 00:11:19.562 "fields": { 00:11:19.562 "major": 25, 00:11:19.562 "minor": 1, 00:11:19.562 "patch": 0, 00:11:19.562 "suffix": "-pre", 00:11:19.562 "commit": "a640d9f98" 00:11:19.562 } 00:11:19.562 } 00:11:19.562 04:33:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:19.562 04:33:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:19.562 04:33:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:19.562 04:33:07 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:19.562 04:33:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:19.562 04:33:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:19.562 04:33:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.562 04:33:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:19.562 04:33:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:19.562 04:33:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:19.562 04:33:07 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:19.820 request: 00:11:19.820 { 00:11:19.820 "method": "env_dpdk_get_mem_stats", 00:11:19.820 "req_id": 1 00:11:19.820 } 00:11:19.820 Got JSON-RPC error response 00:11:19.820 response: 00:11:19.820 { 00:11:19.820 "code": -32601, 00:11:19.820 "message": "Method not found" 00:11:19.820 } 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:19.820 04:33:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59915 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59915 ']' 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59915 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59915 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.820 killing process with pid 59915 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59915' 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@973 -- # kill 59915 00:11:19.820 04:33:07 app_cmdline -- common/autotest_common.sh@978 -- # wait 59915 00:11:22.347 00:11:22.347 real 0m4.460s 00:11:22.347 user 0m5.021s 00:11:22.347 sys 0m0.653s 00:11:22.347 04:33:09 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.347 04:33:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:22.347 ************************************ 00:11:22.347 END TEST app_cmdline 00:11:22.347 ************************************ 00:11:22.347 04:33:09 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:22.347 04:33:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:22.347 04:33:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.347 04:33:09 -- common/autotest_common.sh@10 -- # set +x 00:11:22.347 ************************************ 00:11:22.347 START TEST version 00:11:22.347 ************************************ 00:11:22.347 04:33:09 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:22.347 * Looking for test storage... 00:11:22.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:22.347 04:33:09 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:22.347 04:33:09 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:22.347 04:33:09 version -- common/autotest_common.sh@1693 -- # lcov --version 00:11:22.347 04:33:09 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:22.347 04:33:09 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.347 04:33:09 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.347 04:33:09 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.347 04:33:09 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.347 04:33:09 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.347 04:33:09 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.347 04:33:09 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.347 04:33:09 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.347 04:33:09 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.347 04:33:09 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:11:22.347 04:33:09 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.347 04:33:09 version -- scripts/common.sh@344 -- # case "$op" in 00:11:22.347 04:33:09 version -- scripts/common.sh@345 -- # : 1 00:11:22.347 04:33:09 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.347 04:33:09 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.347 04:33:09 version -- scripts/common.sh@365 -- # decimal 1 00:11:22.347 04:33:09 version -- scripts/common.sh@353 -- # local d=1 00:11:22.347 04:33:09 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.347 04:33:09 version -- scripts/common.sh@355 -- # echo 1 00:11:22.347 04:33:09 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.347 04:33:09 version -- scripts/common.sh@366 -- # decimal 2 00:11:22.347 04:33:09 version -- scripts/common.sh@353 -- # local d=2 00:11:22.347 04:33:09 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.347 04:33:09 version -- scripts/common.sh@355 -- # echo 2 00:11:22.347 04:33:09 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.347 04:33:09 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.347 04:33:09 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.347 04:33:09 version -- scripts/common.sh@368 -- # return 0 00:11:22.347 04:33:09 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.347 04:33:09 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.348 --rc genhtml_branch_coverage=1 00:11:22.348 --rc genhtml_function_coverage=1 00:11:22.348 --rc genhtml_legend=1 00:11:22.348 --rc geninfo_all_blocks=1 00:11:22.348 --rc geninfo_unexecuted_blocks=1 00:11:22.348 00:11:22.348 ' 00:11:22.348 04:33:09 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:11:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.348 --rc genhtml_branch_coverage=1 00:11:22.348 --rc genhtml_function_coverage=1 00:11:22.348 --rc genhtml_legend=1 00:11:22.348 --rc geninfo_all_blocks=1 00:11:22.348 --rc geninfo_unexecuted_blocks=1 00:11:22.348 00:11:22.348 ' 00:11:22.348 04:33:09 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.348 --rc genhtml_branch_coverage=1 00:11:22.348 --rc genhtml_function_coverage=1 00:11:22.348 --rc genhtml_legend=1 00:11:22.348 --rc geninfo_all_blocks=1 00:11:22.348 --rc geninfo_unexecuted_blocks=1 00:11:22.348 00:11:22.348 ' 00:11:22.348 04:33:09 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.348 --rc genhtml_branch_coverage=1 00:11:22.348 --rc genhtml_function_coverage=1 00:11:22.348 --rc genhtml_legend=1 00:11:22.348 --rc geninfo_all_blocks=1 00:11:22.348 --rc geninfo_unexecuted_blocks=1 00:11:22.348 00:11:22.348 ' 00:11:22.348 04:33:09 version -- app/version.sh@17 -- # get_header_version major 00:11:22.348 04:33:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:22.348 04:33:09 version -- app/version.sh@14 -- # cut -f2 00:11:22.348 04:33:09 version -- app/version.sh@14 -- # tr -d '"' 00:11:22.348 04:33:09 version -- app/version.sh@17 -- # major=25 00:11:22.348 04:33:09 version -- app/version.sh@18 -- # get_header_version minor 00:11:22.348 04:33:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:22.348 04:33:09 version -- app/version.sh@14 -- # cut -f2 00:11:22.348 04:33:09 version -- app/version.sh@14 -- # tr -d '"' 00:11:22.348 04:33:09 version -- app/version.sh@18 -- # minor=1 00:11:22.348 04:33:09 
version -- app/version.sh@19 -- # get_header_version patch 00:11:22.348 04:33:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:22.348 04:33:09 version -- app/version.sh@14 -- # cut -f2 00:11:22.348 04:33:09 version -- app/version.sh@14 -- # tr -d '"' 00:11:22.348 04:33:09 version -- app/version.sh@19 -- # patch=0 00:11:22.348 04:33:09 version -- app/version.sh@20 -- # get_header_version suffix 00:11:22.348 04:33:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:22.348 04:33:09 version -- app/version.sh@14 -- # cut -f2 00:11:22.348 04:33:09 version -- app/version.sh@14 -- # tr -d '"' 00:11:22.348 04:33:09 version -- app/version.sh@20 -- # suffix=-pre 00:11:22.348 04:33:09 version -- app/version.sh@22 -- # version=25.1 00:11:22.348 04:33:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:22.348 04:33:09 version -- app/version.sh@28 -- # version=25.1rc0 00:11:22.348 04:33:09 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:22.348 04:33:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:22.348 04:33:09 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:22.348 04:33:09 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:22.348 00:11:22.348 real 0m0.249s 00:11:22.348 user 0m0.158s 00:11:22.348 sys 0m0.128s 00:11:22.348 04:33:09 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.348 04:33:09 version -- common/autotest_common.sh@10 -- # set +x 00:11:22.348 ************************************ 00:11:22.348 END TEST version 00:11:22.348 ************************************ 00:11:22.606 
04:33:09 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:22.606 04:33:09 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:11:22.606 04:33:09 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:22.606 04:33:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:22.607 04:33:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.607 04:33:09 -- common/autotest_common.sh@10 -- # set +x 00:11:22.607 ************************************ 00:11:22.607 START TEST bdev_raid 00:11:22.607 ************************************ 00:11:22.607 04:33:09 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:22.607 * Looking for test storage... 00:11:22.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@345 -- # : 1 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.607 04:33:10 bdev_raid -- scripts/common.sh@368 -- # return 0 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.607 --rc genhtml_branch_coverage=1 00:11:22.607 --rc genhtml_function_coverage=1 00:11:22.607 --rc genhtml_legend=1 00:11:22.607 --rc geninfo_all_blocks=1 00:11:22.607 --rc geninfo_unexecuted_blocks=1 00:11:22.607 00:11:22.607 ' 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:22.607 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:22.607 --rc genhtml_branch_coverage=1 00:11:22.607 --rc genhtml_function_coverage=1 00:11:22.607 --rc genhtml_legend=1 00:11:22.607 --rc geninfo_all_blocks=1 00:11:22.607 --rc geninfo_unexecuted_blocks=1 00:11:22.607 00:11:22.607 ' 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.607 --rc genhtml_branch_coverage=1 00:11:22.607 --rc genhtml_function_coverage=1 00:11:22.607 --rc genhtml_legend=1 00:11:22.607 --rc geninfo_all_blocks=1 00:11:22.607 --rc geninfo_unexecuted_blocks=1 00:11:22.607 00:11:22.607 ' 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.607 --rc genhtml_branch_coverage=1 00:11:22.607 --rc genhtml_function_coverage=1 00:11:22.607 --rc genhtml_legend=1 00:11:22.607 --rc geninfo_all_blocks=1 00:11:22.607 --rc geninfo_unexecuted_blocks=1 00:11:22.607 00:11:22.607 ' 00:11:22.607 04:33:10 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:22.607 04:33:10 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:11:22.607 04:33:10 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:11:22.607 04:33:10 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:11:22.607 04:33:10 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:11:22.607 04:33:10 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:11:22.607 04:33:10 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.607 04:33:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.607 ************************************ 
00:11:22.607 START TEST raid1_resize_data_offset_test 00:11:22.607 ************************************ 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60103 00:11:22.607 Process raid pid: 60103 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60103' 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60103 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60103 ']' 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.607 04:33:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.866 [2024-11-27 04:33:10.343041] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:11:22.866 [2024-11-27 04:33:10.343225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.125 [2024-11-27 04:33:10.533827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.125 [2024-11-27 04:33:10.692420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.383 [2024-11-27 04:33:10.932941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.383 [2024-11-27 04:33:10.933032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.951 malloc0 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.951 malloc1 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.951 04:33:11 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.951 null0 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.951 [2024-11-27 04:33:11.498529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:11:23.951 [2024-11-27 04:33:11.500999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:23.951 [2024-11-27 04:33:11.501082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:11:23.951 [2024-11-27 04:33:11.501311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:23.951 [2024-11-27 04:33:11.501345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:11:23.951 [2024-11-27 04:33:11.501726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:23.951 [2024-11-27 04:33:11.501985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:23.951 [2024-11-27 04:33:11.502020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:23.951 [2024-11-27 04:33:11.502235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.951 [2024-11-27 04:33:11.554553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.951 04:33:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:24.518 malloc2
00:11:24.518 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.518 04:33:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:11:24.518 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:24.518 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:24.518 [2024-11-27 04:33:12.120980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:24.518 [2024-11-27 04:33:12.138253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:24.518 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.777 [2024-11-27 04:33:12.140693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60103
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60103 ']'
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60103
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60103
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 60103
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60103'
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60103
00:11:24.777 04:33:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60103
00:11:24.777 [2024-11-27 04:33:12.226219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:24.777 [2024-11-27 04:33:12.227052] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:11:24.777 [2024-11-27 04:33:12.227129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:24.777 [2024-11-27 04:33:12.227156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:11:24.777 [2024-11-27 04:33:12.259232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:24.777 [2024-11-27 04:33:12.259656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:24.777 [2024-11-27 04:33:12.259691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:11:26.712 [2024-11-27 04:33:13.923323] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:27.647 04:33:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:11:27.647
00:11:27.647 real 0m4.815s
00:11:27.647 user 0m4.749s
00:11:27.647 sys 0m0.687s
00:11:27.647 04:33:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:27.647 04:33:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.647 ************************************
00:11:27.647 END TEST raid1_resize_data_offset_test
00:11:27.647 ************************************
00:11:27.647 04:33:15 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:11:27.647 04:33:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:27.647 04:33:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:27.647 04:33:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:27.647 ************************************
00:11:27.647 START TEST raid0_resize_superblock_test
00:11:27.647 ************************************
00:11:27.647 04:33:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:11:27.647 04:33:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:11:27.647 04:33:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60186
00:11:27.647 04:33:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:27.648 04:33:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60186'
Process raid pid: 60186
00:11:27.648 04:33:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60186
00:11:27.648 04:33:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60186 ']'
00:11:27.648 04:33:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:27.648 04:33:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:27.648 04:33:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:27.648 04:33:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:27.648 04:33:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 04:33:15.166645] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
[2024-11-27 04:33:15.166845] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:27.907 [2024-11-27 04:33:15.361897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:28.166 [2024-11-27 04:33:15.516788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:28.166 [2024-11-27 04:33:15.736370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:28.166 [2024-11-27 04:33:15.736419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:28.751 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:28.751 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:28.751 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:11:28.751 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.751 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.319 malloc0
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.319 [2024-11-27 04:33:16.719936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:11:29.319 [2024-11-27 04:33:16.720011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:29.319 [2024-11-27 04:33:16.720045] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:29.319 [2024-11-27 04:33:16.720064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:29.319 [2024-11-27 04:33:16.722885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:29.319 [2024-11-27 04:33:16.722936] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.319 77984d08-5b59-46e2-9f19-74e9627efb1a
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.319 932868ab-a75b-4728-bbfc-e31eda9c6c86
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.319 c939ab6d-7e34-4bf5-9266-f4cb73287a8d
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.319 [2024-11-27 04:33:16.861038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 932868ab-a75b-4728-bbfc-e31eda9c6c86 is claimed
00:11:29.319 [2024-11-27 04:33:16.861201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c939ab6d-7e34-4bf5-9266-f4cb73287a8d is claimed
00:11:29.319 [2024-11-27 04:33:16.861445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:29.319 [2024-11-27 04:33:16.861485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:11:29.319 [2024-11-27 04:33:16.861896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:29.319 [2024-11-27 04:33:16.862258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:29.319 [2024-11-27 04:33:16.862288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:11:29.319 [2024-11-27 04:33:16.862513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:11:29.319 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.579 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:11:29.579 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:11:29.579 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:11:29.579 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:11:29.579 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:11:29.579 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.579 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.579 [2024-11-27 04:33:16.973376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:29.579 04:33:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.579 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:11:29.579 04:33:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.579 [2024-11-27 04:33:17.021382] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:11:29.579 [2024-11-27 04:33:17.021426] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '932868ab-a75b-4728-bbfc-e31eda9c6c86' was resized: old size 131072, new size 204800
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
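The size checks traced above are plain arithmetic over the lvol sizes and the per-base-bdev reservation made for the raid superblock. A minimal sketch of that arithmetic follows; note the 8192-block offset is inferred from the logged totals (2 * (131072 - 8192) = 245760), not stated explicitly anywhere in the trace:

```shell
# Block-count arithmetic behind the (( 245760 == 245760 )) and later
# (( 393216 == 393216 )) checks in the trace. Blocks are 512 bytes.
BLOCKS_PER_MIB=$((1024 * 1024 / 512))   # 2048 blocks per MiB
DATA_OFFSET=8192                        # inferred per-base reservation (assumption)

# raid0 over two equally sized lvols: usable blocks of both bases are summed
raid0_num_blocks() {
    local lvol_mib=$1
    echo $((2 * (lvol_mib * BLOCKS_PER_MIB - DATA_OFFSET)))
}

raid0_num_blocks 64    # 245760, matching the pre-resize check
raid0_num_blocks 100   # 393216, matching the post-resize check
```

The same reservation is consistent with the raid1 variant later in the log, where the mirrored array reports a single base's usable size, 122880 blocks.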
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.579 [2024-11-27 04:33:17.029221] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:11:29.579 [2024-11-27 04:33:17.029261] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c939ab6d-7e34-4bf5-9266-f4cb73287a8d' was resized: old size 131072, new size 204800
00:11:29.579 [2024-11-27 04:33:17.029300] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.579 [2024-11-27 04:33:17.149439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.579 [2024-11-27 04:33:17.193173] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:11:29.579 [2024-11-27 04:33:17.193276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:11:29.579 [2024-11-27 04:33:17.193302] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:29.579 [2024-11-27 04:33:17.193323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:11:29.579 [2024-11-27 04:33:17.193469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:29.579 [2024-11-27 04:33:17.193525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:29.579 [2024-11-27 04:33:17.193548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.579 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.839 [2024-11-27 04:33:17.201019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:11:29.839 [2024-11-27 04:33:17.201082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:29.839 [2024-11-27 04:33:17.201111] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:11:29.839 [2024-11-27 04:33:17.201129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:29.839 [2024-11-27 04:33:17.204010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:29.839 [2024-11-27 04:33:17.204063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
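The passthru delete/recreate traced around this point exercises reassembly of the array from its on-disk superblock. A dry-run sketch of that RPC order follows; the command names and arguments are the ones visible in the trace, but the script only prints them, since actually replaying them requires a live SPDK target:

```shell
# Dry-run of the superblock-reassembly RPC sequence from the trace.
# Printed rather than executed: a running SPDK app is needed to replay it.
cmds=(
    "bdev_passthru_delete pt0"                 # closes lvstore lvs0, Raid goes offline
    "bdev_passthru_create -b malloc0 -p pt0"   # re-expose the base bdev
    "bdev_wait_for_examine"                    # examine finds the raid superblock
    "bdev_get_bdevs -b Raid"                   # reassembled Raid reports 393216 blocks
)
for c in "${cmds[@]}"; do
    printf 'rpc.py %s\n' "$c"
done
```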
00:11:29.839 pt0
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.839 [2024-11-27 04:33:17.206385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 932868ab-a75b-4728-bbfc-e31eda9c6c86
00:11:29.839 [2024-11-27 04:33:17.206478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 932868ab-a75b-4728-bbfc-e31eda9c6c86 is claimed
00:11:29.839 [2024-11-27 04:33:17.206619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c939ab6d-7e34-4bf5-9266-f4cb73287a8d
00:11:29.839 [2024-11-27 04:33:17.206654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c939ab6d-7e34-4bf5-9266-f4cb73287a8d is claimed
00:11:29.839 [2024-11-27 04:33:17.206845] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c939ab6d-7e34-4bf5-9266-f4cb73287a8d (2) smaller than existing raid bdev Raid (3)
00:11:29.839 [2024-11-27 04:33:17.206888] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 932868ab-a75b-4728-bbfc-e31eda9c6c86: File exists
00:11:29.839 [2024-11-27 04:33:17.206938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:11:29.839 [2024-11-27 04:33:17.206958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:11:29.839 [2024-11-27 04:33:17.207280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:11:29.839 [2024-11-27 04:33:17.207504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:11:29.839 [2024-11-27 04:33:17.207530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:11:29.839 [2024-11-27 04:33:17.207720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:11:29.839 [2024-11-27 04:33:17.221406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60186
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60186 ']'
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60186
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60186
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 60186
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60186'
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60186
00:11:29.839 [2024-11-27 04:33:17.298041] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:29.839 04:33:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60186
00:11:29.839 [2024-11-27 04:33:17.298175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:29.839 [2024-11-27 04:33:17.298250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:29.839 [2024-11-27 04:33:17.298265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:11:31.214 [2024-11-27 04:33:18.599001] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:32.161 04:33:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:11:32.161
00:11:32.161 real 0m4.622s
00:11:32.161 user 0m4.938s
00:11:32.161 sys 0m0.660s
00:11:32.161 04:33:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:32.161 04:33:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.161 ************************************
00:11:32.161 END TEST raid0_resize_superblock_test
00:11:32.161 ************************************
00:11:32.161 04:33:19 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:11:32.161 04:33:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:32.161 04:33:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:32.161 04:33:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:32.161 ************************************
00:11:32.161 START TEST raid1_resize_superblock_test
00:11:32.161 ************************************
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60285
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60285'
Process raid pid: 60285
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60285
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60285 ']'
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:32.161 04:33:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 04:33:19.864350] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
[2024-11-27 04:33:19.864593] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:32.678 [2024-11-27 04:33:20.052864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:32.678 [2024-11-27 04:33:20.185820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:32.936 [2024-11-27 04:33:20.390405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:32.936 [2024-11-27 04:33:20.390446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:33.503 04:33:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:33.503 04:33:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:33.503 04:33:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:11:33.503 04:33:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.503 04:33:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.071 malloc0
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.071 [2024-11-27 04:33:21.399758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:11:34.071 [2024-11-27 04:33:21.399850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:34.071 [2024-11-27 04:33:21.399885] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:34.071 [2024-11-27 04:33:21.399909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:34.071 [2024-11-27 04:33:21.403305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:34.071 [2024-11-27 04:33:21.403360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.071 bbb98802-5c50-4b1d-b950-70896190a35a
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.071 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.072 4773b3f8-d3de-45cd-9a1d-a6f08995d72d
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.072 167e8081-aadd-456c-8c79-e656ba1a44ed
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.072 [2024-11-27 04:33:21.537212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4773b3f8-d3de-45cd-9a1d-a6f08995d72d is claimed
00:11:34.072 [2024-11-27 04:33:21.537333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 167e8081-aadd-456c-8c79-e656ba1a44ed is claimed
00:11:34.072 [2024-11-27 04:33:21.537534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:34.072 [2024-11-27 04:33:21.537561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:11:34.072 [2024-11-27 04:33:21.537938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:34.072 [2024-11-27 04:33:21.538230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:34.072 [2024-11-27 04:33:21.538249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:11:34.072 [2024-11-27 04:33:21.538448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.072 [2024-11-27 04:33:21.645538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.072 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.331 [2024-11-27 04:33:21.697544] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:34.331 [2024-11-27 04:33:21.697584] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4773b3f8-d3de-45cd-9a1d-a6f08995d72d' was resized: old size 131072, new size 204800 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:11:34.331 04:33:21 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.331 [2024-11-27 04:33:21.705408] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:34.331 [2024-11-27 04:33:21.705440] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '167e8081-aadd-456c-8c79-e656ba1a44ed' was resized: old size 131072, new size 204800 00:11:34.331 [2024-11-27 04:33:21.705482] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:34.331 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:11:34.332 [2024-11-27 04:33:21.817562] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.332 [2024-11-27 04:33:21.865332] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:11:34.332 [2024-11-27 04:33:21.865441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:11:34.332 [2024-11-27 04:33:21.865483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:34.332 [2024-11-27 04:33:21.865708] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.332 [2024-11-27 04:33:21.866041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.332 [2024-11-27 04:33:21.866141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.332 [2024-11-27 04:33:21.866178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.332 [2024-11-27 04:33:21.873182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:34.332 [2024-11-27 04:33:21.873246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.332 [2024-11-27 04:33:21.873276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:34.332 [2024-11-27 04:33:21.873297] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.332 [2024-11-27 04:33:21.876168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.332 [2024-11-27 04:33:21.876222] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:34.332 pt0 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.332 
04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.332 [2024-11-27 04:33:21.878528] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4773b3f8-d3de-45cd-9a1d-a6f08995d72d 00:11:34.332 [2024-11-27 04:33:21.878617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4773b3f8-d3de-45cd-9a1d-a6f08995d72d is claimed 00:11:34.332 [2024-11-27 04:33:21.878766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 167e8081-aadd-456c-8c79-e656ba1a44ed 00:11:34.332 [2024-11-27 04:33:21.878826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 167e8081-aadd-456c-8c79-e656ba1a44ed is claimed 00:11:34.332 [2024-11-27 04:33:21.878991] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 167e8081-aadd-456c-8c79-e656ba1a44ed (2) smaller than existing raid bdev Raid (3) 00:11:34.332 [2024-11-27 04:33:21.879025] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 4773b3f8-d3de-45cd-9a1d-a6f08995d72d: File exists 00:11:34.332 [2024-11-27 04:33:21.879081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:34.332 [2024-11-27 04:33:21.879100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:34.332 [2024-11-27 04:33:21.879426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:34.332 [2024-11-27 04:33:21.879635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:34.332 [2024-11-27 04:33:21.879650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:11:34.332 
[2024-11-27 04:33:21.879852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.332 [2024-11-27 04:33:21.893545] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60285 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60285 ']' 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60285 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:11:34.332 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60285 00:11:34.591 killing process with pid 60285 00:11:34.591 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.591 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.591 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60285' 00:11:34.591 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60285 00:11:34.591 [2024-11-27 04:33:21.974542] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.591 04:33:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60285 00:11:34.591 [2024-11-27 04:33:21.974649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.591 [2024-11-27 04:33:21.974726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.591 [2024-11-27 04:33:21.974742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:11:35.967 [2024-11-27 04:33:23.273760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.900 ************************************ 00:11:36.900 END TEST raid1_resize_superblock_test 00:11:36.900 ************************************ 00:11:36.900 04:33:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:11:36.900 00:11:36.900 real 0m4.615s 00:11:36.900 user 0m4.897s 00:11:36.900 sys 0m0.672s 00:11:36.900 04:33:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.900 04:33:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.900 
04:33:24 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:11:36.900 04:33:24 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:11:36.900 04:33:24 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:11:36.900 04:33:24 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:11:36.900 04:33:24 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:11:36.900 04:33:24 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:11:36.900 04:33:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.900 04:33:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.900 04:33:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.900 ************************************ 00:11:36.900 START TEST raid_function_test_raid0 00:11:36.900 ************************************ 00:11:36.900 04:33:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:11:36.900 04:33:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:11:36.900 04:33:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:11:36.900 04:33:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:11:36.900 Process raid pid: 60387 00:11:36.900 04:33:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60387 00:11:36.900 04:33:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60387' 00:11:36.900 04:33:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:36.900 04:33:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60387 00:11:36.901 04:33:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60387 ']' 00:11:36.901 04:33:24 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.901 04:33:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.901 04:33:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.901 04:33:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.901 04:33:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:37.159 [2024-11-27 04:33:24.527809] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:11:37.159 [2024-11-27 04:33:24.527993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.159 [2024-11-27 04:33:24.719247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.418 [2024-11-27 04:33:24.881163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.675 [2024-11-27 04:33:25.122669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.675 [2024-11-27 04:33:25.122729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.933 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.933 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:11:37.933 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:11:37.933 04:33:25 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.933 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:37.933 Base_1 00:11:37.933 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.933 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:11:37.933 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.933 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:38.192 Base_2 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:38.192 [2024-11-27 04:33:25.582519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:38.192 [2024-11-27 04:33:25.585046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:38.192 [2024-11-27 04:33:25.585270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:38.192 [2024-11-27 04:33:25.585449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:38.192 [2024-11-27 04:33:25.585851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:38.192 [2024-11-27 04:33:25.586189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:38.192 [2024-11-27 04:33:25.586323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:11:38.192 [2024-11-27 04:33:25.586713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:38.192 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:38.192 
04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:11:38.451 [2024-11-27 04:33:25.942842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.451 /dev/nbd0 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:38.451 04:33:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.451 1+0 records in 00:11:38.451 1+0 records out 00:11:38.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373163 s, 11.0 MB/s 00:11:38.451 04:33:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.451 04:33:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:11:38.451 
04:33:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.451 04:33:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:38.451 04:33:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:11:38.451 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.451 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:38.451 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:11:38.451 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:38.451 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:38.710 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:38.710 { 00:11:38.710 "nbd_device": "/dev/nbd0", 00:11:38.710 "bdev_name": "raid" 00:11:38.710 } 00:11:38.710 ]' 00:11:38.710 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:38.710 { 00:11:38.710 "nbd_device": "/dev/nbd0", 00:11:38.710 "bdev_name": "raid" 00:11:38.710 } 00:11:38.710 ]' 00:11:38.710 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:11:38.969 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:11:38.970 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:11:38.970 04:33:26 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:38.970 4096+0 records in 00:11:38.970 4096+0 records out 00:11:38.970 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0412947 s, 50.8 MB/s 00:11:38.970 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:39.229 4096+0 records in 00:11:39.229 4096+0 records out 00:11:39.229 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.338141 s, 6.2 MB/s 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:39.229 128+0 records in 00:11:39.229 128+0 records out 00:11:39.229 65536 bytes (66 kB, 64 KiB) copied, 0.000848449 s, 77.2 MB/s 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:39.229 
04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:39.229 2035+0 records in 00:11:39.229 2035+0 records out 00:11:39.229 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0121188 s, 86.0 MB/s 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:39.229 456+0 records in 00:11:39.229 456+0 records out 00:11:39.229 233472 bytes (233 kB, 228 KiB) copied, 0.00210142 s, 111 MB/s 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.229 04:33:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:39.796 [2024-11-27 04:33:27.121107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.796 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:39.796 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:39.796 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:39.796 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.796 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.796 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:39.796 04:33:27 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:11:39.796 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.796 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:11:39.796 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:39.796 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60387 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60387 ']' 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60387 
00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60387 00:11:40.055 killing process with pid 60387 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60387' 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60387 00:11:40.055 [2024-11-27 04:33:27.500666] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.055 04:33:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60387 00:11:40.055 [2024-11-27 04:33:27.500808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.055 [2024-11-27 04:33:27.500878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.055 [2024-11-27 04:33:27.500902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:11:40.313 [2024-11-27 04:33:27.687214] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.249 ************************************ 00:11:41.249 END TEST raid_function_test_raid0 00:11:41.249 ************************************ 00:11:41.249 04:33:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:11:41.249 00:11:41.249 real 0m4.315s 00:11:41.249 user 0m5.277s 00:11:41.249 sys 0m1.017s 00:11:41.249 04:33:28 bdev_raid.raid_function_test_raid0 -- 
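The `killprocess 60387` sequence above checks that the pid is non-empty and alive (`kill -0`), inspects the process name via `ps`, refuses to signal a `sudo` wrapper, then kills and waits. A simplified sketch of that pattern, exercised against a disposable `sleep` child; this is behaviour inferred from the trace, not the canonical helper in `common/autotest_common.sh`:

```shell
#!/usr/bin/env bash
# Simplified killprocess-style teardown: validate the pid before
# signalling it, then reap it with wait.
killprocess() {
    local pid=$1 name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1        # still running?
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != sudo ] || return 1               # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                       # reap; returns 128+SIGTERM
}

sleep 60 &                    # disposable target process
target_pid=$!
killprocess "$target_pid" || true
```

After `wait` returns the child is reaped, so a follow-up `kill -0 "$target_pid"` fails, which is what the real helper relies on before tearing down the workspace.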
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.249 04:33:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:41.249 04:33:28 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:11:41.249 04:33:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:41.249 04:33:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.249 04:33:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.249 ************************************ 00:11:41.249 START TEST raid_function_test_concat 00:11:41.249 ************************************ 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:11:41.249 Process raid pid: 60522 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60522 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60522' 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60522 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60522 ']' 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:11:41.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.249 04:33:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:41.508 [2024-11-27 04:33:28.881158] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:11:41.508 [2024-11-27 04:33:28.881522] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.508 [2024-11-27 04:33:29.066953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.765 [2024-11-27 04:33:29.199063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.023 [2024-11-27 04:33:29.405479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.023 [2024-11-27 04:33:29.405518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.283 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.283 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:11:42.283 04:33:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:11:42.283 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.283 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:42.283 Base_1 00:11:42.283 04:33:29 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.283 04:33:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:11:42.283 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.283 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:42.541 Base_2 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:42.541 [2024-11-27 04:33:29.941726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:42.541 [2024-11-27 04:33:29.944344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:42.541 [2024-11-27 04:33:29.944478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:42.541 [2024-11-27 04:33:29.944510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:42.541 [2024-11-27 04:33:29.944962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:42.541 [2024-11-27 04:33:29.945169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:42.541 [2024-11-27 04:33:29.945186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:11:42.541 [2024-11-27 04:33:29.945375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.541 04:33:29 
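The setup traced here creates two malloc base bdevs (32 MiB, 512-byte blocks) and assembles them into a concat array via three RPCs. The sequence can be replayed as a plain script; `rpc_cmd` is stubbed below so the sketch runs without a live SPDK target (in the real suite it wraps `scripts/rpc.py -s /var/tmp/spdk.sock`):

```shell
#!/usr/bin/env bash
# Stubbed replay of the raid-creation RPC sequence from this trace.
rpc_cmd() { echo "rpc: $*"; }                      # stand-in for rpc.py

rpc_cmd bdev_malloc_create 32 512 -b Base_1        # 32 MiB, 512 B blocks
rpc_cmd bdev_malloc_create 32 512 -b Base_2
rpc_cmd bdev_raid_create -z 64 -r concat -b 'Base_1 Base_2' -n raid
rpc_cmd bdev_raid_get_bdevs online                 # trace then jq's out .[0].name
```

The resulting bdev reports `blockcnt 131072, blocklen 512` in the trace, i.e. both 32 MiB bases concatenated into 64 MiB.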
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.541 04:33:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:42.541 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:42.541 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:42.541 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:42.541 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:11:42.541 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:42.541 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.541 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:11:42.799 [2024-11-27 04:33:30.273911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:42.799 /dev/nbd0 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.799 1+0 records in 00:11:42.799 1+0 records out 00:11:42.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326044 s, 12.6 MB/s 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.799 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:43.058 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:43.058 { 00:11:43.058 "nbd_device": "/dev/nbd0", 00:11:43.058 "bdev_name": "raid" 00:11:43.058 } 00:11:43.058 ]' 00:11:43.058 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:43.058 { 00:11:43.058 "nbd_device": "/dev/nbd0", 00:11:43.058 "bdev_name": "raid" 00:11:43.058 } 00:11:43.058 ]' 00:11:43.058 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:11:43.317 04:33:30 
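The `nbd_get_count` steps above pipe the `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'` and count matches with `grep -c /dev/nbd`; the bare `true` that follows in the trace exists because `grep -c` exits non-zero when the count is 0, which would otherwise trip a `set -e` script. A small file-free sketch of that counting idiom (illustrative, not the suite's helper):

```shell
#!/usr/bin/env bash
# Count /dev/nbd entries in a newline-separated name list, tolerating an
# empty list the way nbd_get_count's `grep -c ... || true` does.
count_nbd() {
    local names=$1
    grep -c /dev/nbd <<< "$names" || true
}

count_nbd '/dev/nbd0'     # one attached device
count_nbd ''              # empty list: grep exits 1 but still prints 0
```

This mirrors the two states seen in the trace: `count=1` while the raid bdev is exported, `count=0` after `nbd_stop_disk`.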
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:43.317 4096+0 records in 00:11:43.317 4096+0 records out 00:11:43.317 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.027187 s, 77.1 MB/s 00:11:43.317 04:33:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:43.575 4096+0 records in 00:11:43.575 4096+0 records out 00:11:43.575 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.323464 s, 6.5 MB/s 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:43.576 128+0 records in 00:11:43.576 128+0 records out 00:11:43.576 65536 bytes (66 kB, 64 KiB) copied, 0.0012104 s, 54.1 MB/s 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:43.576 2035+0 records in 00:11:43.576 2035+0 records out 00:11:43.576 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0101427 s, 103 MB/s 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:43.576 456+0 records in 00:11:43.576 456+0 records out 00:11:43.576 233472 bytes (233 kB, 228 KiB) copied, 0.00214766 s, 109 MB/s 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.576 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:43.880 [2024-11-27 04:33:31.491577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.880 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:43.880 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:43.880 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:43.880 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.880 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.880 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:44.138 04:33:31 bdev_raid.raid_function_test_concat 
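The `waitfornbd_exit` fragment above polls `/proc/partitions` up to 20 times until the nbd name disappears. A file-backed sketch of that polling loop, with a temp file standing in for `/proc/partitions` so it runs anywhere (assumed structure inferred from the traced loop counters, not the exact `nbd_common.sh` source):

```shell
#!/usr/bin/env bash
# Poll a partitions table until a device name vanishes, or give up
# after 20 attempts (0.1 s apart), like waitfornbd_exit.
waitfornbd_exit() {
    local nbd_name=$1 partitions=$2 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$partitions" || break
        sleep 0.1
    done
    ((i <= 20))       # non-zero exit if the device never went away
}

table=$(mktemp)
printf '259 0 1048576 nvme0n1\n' > "$table"   # no nbd0 entry: succeeds at once
waitfornbd_exit nbd0 "$table" && echo "nbd0 gone"
rm -f "$table"
```

In the trace the first `grep -q -w nbd0 /proc/partitions` already fails after `nbd_stop_disk`, so the loop `break`s on iteration 1 and the helper returns 0.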
-- bdev/nbd_common.sh@41 -- # break 00:11:44.138 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.138 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:11:44.138 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:44.138 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:44.396 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:44.396 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:44.396 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:44.396 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:44.396 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:44.396 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:44.396 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60522 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60522 ']' 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60522 00:11:44.397 
04:33:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60522 00:11:44.397 killing process with pid 60522 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60522' 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60522 00:11:44.397 [2024-11-27 04:33:31.966650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.397 04:33:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60522 00:11:44.397 [2024-11-27 04:33:31.966781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.397 [2024-11-27 04:33:31.966856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.397 [2024-11-27 04:33:31.966876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:11:44.654 [2024-11-27 04:33:32.153400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.029 04:33:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:11:46.029 00:11:46.029 real 0m4.422s 00:11:46.029 user 0m5.470s 00:11:46.029 sys 0m1.038s 00:11:46.029 04:33:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.029 04:33:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 
00:11:46.029 ************************************ 00:11:46.029 END TEST raid_function_test_concat 00:11:46.029 ************************************ 00:11:46.029 04:33:33 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:11:46.029 04:33:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.029 04:33:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.029 04:33:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.029 ************************************ 00:11:46.029 START TEST raid0_resize_test 00:11:46.029 ************************************ 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:11:46.029 Process raid pid: 60650 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60650 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60650' 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60650 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60650 ']' 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.029 04:33:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.029 [2024-11-27 04:33:33.373192] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:11:46.029 [2024-11-27 04:33:33.373635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.029 [2024-11-27 04:33:33.560481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.288 [2024-11-27 04:33:33.693874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.288 [2024-11-27 04:33:33.902083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.288 [2024-11-27 04:33:33.902366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.854 Base_1 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.854 Base_2 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.854 [2024-11-27 04:33:34.431030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:46.854 [2024-11-27 04:33:34.433485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:46.854 [2024-11-27 04:33:34.433723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:46.854 [2024-11-27 04:33:34.433756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:46.854 [2024-11-27 04:33:34.434137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:46.854 [2024-11-27 04:33:34.434309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:46.854 [2024-11-27 04:33:34.434324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:46.854 [2024-11-27 04:33:34.434572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.854 [2024-11-27 04:33:34.439001] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:46.854 [2024-11-27 04:33:34.439036] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:46.854 true 
00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.854 [2024-11-27 04:33:34.451207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.854 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.113 [2024-11-27 04:33:34.491023] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:47.113 [2024-11-27 04:33:34.491059] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:47.113 [2024-11-27 04:33:34.491112] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:11:47.113 true 
00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.113 [2024-11-27 04:33:34.503226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60650 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60650 ']' 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60650 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60650 00:11:47.113 killing process with pid 60650 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60650' 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60650 00:11:47.113 [2024-11-27 04:33:34.581065] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.113 04:33:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60650 00:11:47.113 [2024-11-27 04:33:34.581180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.113 [2024-11-27 04:33:34.581248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.113 [2024-11-27 04:33:34.581264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:47.113 [2024-11-27 04:33:34.596689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.047 04:33:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:48.047 00:11:48.047 real 0m2.375s 00:11:48.047 user 0m2.656s 00:11:48.047 sys 0m0.392s 00:11:48.047 04:33:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.047 ************************************ 00:11:48.047 END TEST raid0_resize_test 00:11:48.047 ************************************ 00:11:48.047 04:33:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.305 04:33:35 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:11:48.305 04:33:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.305 04:33:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.305 04:33:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.305 ************************************ 
00:11:48.305 START TEST raid1_resize_test 00:11:48.305 ************************************ 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60706 00:11:48.305 Process raid pid: 60706 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:48.305 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60706' 00:11:48.306 04:33:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60706 00:11:48.306 04:33:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60706 ']' 00:11:48.306 04:33:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.306 04:33:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:48.306 04:33:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.306 04:33:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.306 04:33:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.306 [2024-11-27 04:33:35.798859] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:11:48.306 [2024-11-27 04:33:35.799044] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.564 [2024-11-27 04:33:35.985178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.564 [2024-11-27 04:33:36.117221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.873 [2024-11-27 04:33:36.323016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.874 [2024-11-27 04:33:36.323073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.438 Base_1 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:11:49.438 
04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.438 Base_2 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.438 [2024-11-27 04:33:36.790283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:49.438 [2024-11-27 04:33:36.792838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:49.438 [2024-11-27 04:33:36.793063] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:49.438 [2024-11-27 04:33:36.793216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:49.438 [2024-11-27 04:33:36.793595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:49.438 [2024-11-27 04:33:36.793934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:49.438 [2024-11-27 04:33:36.794067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:49.438 [2024-11-27 04:33:36.794511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:11:49.438 04:33:36 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.438 [2024-11-27 04:33:36.798454] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:49.438 [2024-11-27 04:33:36.798504] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:49.438 true 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.438 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:11:49.438 [2024-11-27 04:33:36.810653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:49.439 [2024-11-27 04:33:36.870499] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:49.439 [2024-11-27 04:33:36.870643] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:49.439 [2024-11-27 04:33:36.870699] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:11:49.439 true 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:11:49.439 [2024-11-27 04:33:36.882805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60706 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60706 ']' 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60706 00:11:49.439 
04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60706 00:11:49.439 killing process with pid 60706 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60706' 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60706 00:11:49.439 [2024-11-27 04:33:36.972376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.439 04:33:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60706 00:11:49.439 [2024-11-27 04:33:36.972480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.439 [2024-11-27 04:33:36.973087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.439 [2024-11-27 04:33:36.973240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:49.439 [2024-11-27 04:33:36.988013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.808 04:33:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:50.808 00:11:50.808 real 0m2.343s 00:11:50.808 user 0m2.608s 00:11:50.808 sys 0m0.371s 00:11:50.808 ************************************ 00:11:50.808 END TEST raid1_resize_test 00:11:50.808 ************************************ 00:11:50.808 04:33:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.808 04:33:38 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.808 04:33:38 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:50.808 04:33:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:50.808 04:33:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:11:50.808 04:33:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.808 04:33:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.808 04:33:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.808 ************************************ 00:11:50.808 START TEST raid_state_function_test 00:11:50.808 ************************************ 00:11:50.808 04:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:11:50.808 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:50.808 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:50.808 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:50.808 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:50.808 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:50.808 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:50.809 04:33:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60774 00:11:50.809 Process raid pid: 60774 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60774' 00:11:50.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60774 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60774 ']' 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.809 04:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.809 [2024-11-27 04:33:38.197958] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:11:50.809 [2024-11-27 04:33:38.198380] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.809 [2024-11-27 04:33:38.380751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.066 [2024-11-27 04:33:38.515316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.324 [2024-11-27 04:33:38.723811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.324 [2024-11-27 04:33:38.724066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.582 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.582 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.582 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:51.582 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.582 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.582 [2024-11-27 04:33:39.184241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.582 [2024-11-27 04:33:39.184303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.582 [2024-11-27 04:33:39.184321] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.582 [2024-11-27 04:33:39.184338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.582 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.583 04:33:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.583 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.862 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.862 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.862 "name": "Existed_Raid", 00:11:51.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.862 "strip_size_kb": 64, 00:11:51.862 "state": "configuring", 00:11:51.862 
"raid_level": "raid0", 00:11:51.862 "superblock": false, 00:11:51.862 "num_base_bdevs": 2, 00:11:51.862 "num_base_bdevs_discovered": 0, 00:11:51.862 "num_base_bdevs_operational": 2, 00:11:51.862 "base_bdevs_list": [ 00:11:51.862 { 00:11:51.862 "name": "BaseBdev1", 00:11:51.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.862 "is_configured": false, 00:11:51.862 "data_offset": 0, 00:11:51.862 "data_size": 0 00:11:51.862 }, 00:11:51.862 { 00:11:51.862 "name": "BaseBdev2", 00:11:51.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.862 "is_configured": false, 00:11:51.862 "data_offset": 0, 00:11:51.862 "data_size": 0 00:11:51.862 } 00:11:51.862 ] 00:11:51.862 }' 00:11:51.862 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.862 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.120 [2024-11-27 04:33:39.696279] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.120 [2024-11-27 04:33:39.696322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:52.120 [2024-11-27 04:33:39.704251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.120 [2024-11-27 04:33:39.704305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.120 [2024-11-27 04:33:39.704320] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.120 [2024-11-27 04:33:39.704339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.120 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.378 [2024-11-27 04:33:39.749984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.378 BaseBdev1 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.378 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.378 [ 00:11:52.378 { 00:11:52.378 "name": "BaseBdev1", 00:11:52.378 "aliases": [ 00:11:52.378 "c2644071-e003-450f-9d71-b7aa1c557b4b" 00:11:52.378 ], 00:11:52.378 "product_name": "Malloc disk", 00:11:52.378 "block_size": 512, 00:11:52.378 "num_blocks": 65536, 00:11:52.378 "uuid": "c2644071-e003-450f-9d71-b7aa1c557b4b", 00:11:52.378 "assigned_rate_limits": { 00:11:52.378 "rw_ios_per_sec": 0, 00:11:52.378 "rw_mbytes_per_sec": 0, 00:11:52.378 "r_mbytes_per_sec": 0, 00:11:52.378 "w_mbytes_per_sec": 0 00:11:52.378 }, 00:11:52.378 "claimed": true, 00:11:52.378 "claim_type": "exclusive_write", 00:11:52.378 "zoned": false, 00:11:52.378 "supported_io_types": { 00:11:52.378 "read": true, 00:11:52.379 "write": true, 00:11:52.379 "unmap": true, 00:11:52.379 "flush": true, 00:11:52.379 "reset": true, 00:11:52.379 "nvme_admin": false, 00:11:52.379 "nvme_io": false, 00:11:52.379 "nvme_io_md": false, 00:11:52.379 "write_zeroes": true, 00:11:52.379 "zcopy": true, 00:11:52.379 "get_zone_info": false, 00:11:52.379 "zone_management": false, 00:11:52.379 "zone_append": false, 00:11:52.379 "compare": false, 00:11:52.379 "compare_and_write": false, 00:11:52.379 "abort": true, 00:11:52.379 "seek_hole": false, 00:11:52.379 "seek_data": false, 00:11:52.379 "copy": true, 00:11:52.379 "nvme_iov_md": 
false 00:11:52.379 }, 00:11:52.379 "memory_domains": [ 00:11:52.379 { 00:11:52.379 "dma_device_id": "system", 00:11:52.379 "dma_device_type": 1 00:11:52.379 }, 00:11:52.379 { 00:11:52.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.379 "dma_device_type": 2 00:11:52.379 } 00:11:52.379 ], 00:11:52.379 "driver_specific": {} 00:11:52.379 } 00:11:52.379 ] 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.379 04:33:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.379 "name": "Existed_Raid", 00:11:52.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.379 "strip_size_kb": 64, 00:11:52.379 "state": "configuring", 00:11:52.379 "raid_level": "raid0", 00:11:52.379 "superblock": false, 00:11:52.379 "num_base_bdevs": 2, 00:11:52.379 "num_base_bdevs_discovered": 1, 00:11:52.379 "num_base_bdevs_operational": 2, 00:11:52.379 "base_bdevs_list": [ 00:11:52.379 { 00:11:52.379 "name": "BaseBdev1", 00:11:52.379 "uuid": "c2644071-e003-450f-9d71-b7aa1c557b4b", 00:11:52.379 "is_configured": true, 00:11:52.379 "data_offset": 0, 00:11:52.379 "data_size": 65536 00:11:52.379 }, 00:11:52.379 { 00:11:52.379 "name": "BaseBdev2", 00:11:52.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.379 "is_configured": false, 00:11:52.379 "data_offset": 0, 00:11:52.379 "data_size": 0 00:11:52.379 } 00:11:52.379 ] 00:11:52.379 }' 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.379 04:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 [2024-11-27 04:33:40.302177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.947 [2024-11-27 04:33:40.302243] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 [2024-11-27 04:33:40.310205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.947 [2024-11-27 04:33:40.312740] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.947 [2024-11-27 04:33:40.312928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.947 "name": "Existed_Raid", 00:11:52.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.947 "strip_size_kb": 64, 00:11:52.947 "state": "configuring", 00:11:52.947 "raid_level": "raid0", 00:11:52.947 "superblock": false, 00:11:52.947 "num_base_bdevs": 2, 00:11:52.947 "num_base_bdevs_discovered": 1, 00:11:52.947 "num_base_bdevs_operational": 2, 00:11:52.947 "base_bdevs_list": [ 00:11:52.947 { 00:11:52.947 "name": "BaseBdev1", 00:11:52.947 "uuid": "c2644071-e003-450f-9d71-b7aa1c557b4b", 00:11:52.947 "is_configured": true, 00:11:52.947 "data_offset": 0, 00:11:52.947 "data_size": 65536 00:11:52.947 }, 00:11:52.947 { 00:11:52.947 "name": "BaseBdev2", 00:11:52.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.947 "is_configured": false, 00:11:52.947 "data_offset": 0, 00:11:52.947 "data_size": 0 
00:11:52.947 } 00:11:52.947 ] 00:11:52.947 }' 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.947 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.515 [2024-11-27 04:33:40.877142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.515 [2024-11-27 04:33:40.877205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:53.515 [2024-11-27 04:33:40.877220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:53.515 [2024-11-27 04:33:40.877577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:53.515 [2024-11-27 04:33:40.877866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:53.515 [2024-11-27 04:33:40.877890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:53.515 [2024-11-27 04:33:40.878220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.515 BaseBdev2 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.515 04:33:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.515 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.515 [ 00:11:53.515 { 00:11:53.515 "name": "BaseBdev2", 00:11:53.515 "aliases": [ 00:11:53.515 "62d9e5ce-0565-4ac4-a6e4-55e28a9fd1ab" 00:11:53.515 ], 00:11:53.515 "product_name": "Malloc disk", 00:11:53.515 "block_size": 512, 00:11:53.515 "num_blocks": 65536, 00:11:53.515 "uuid": "62d9e5ce-0565-4ac4-a6e4-55e28a9fd1ab", 00:11:53.515 "assigned_rate_limits": { 00:11:53.515 "rw_ios_per_sec": 0, 00:11:53.515 "rw_mbytes_per_sec": 0, 00:11:53.515 "r_mbytes_per_sec": 0, 00:11:53.515 "w_mbytes_per_sec": 0 00:11:53.515 }, 00:11:53.515 "claimed": true, 00:11:53.515 "claim_type": "exclusive_write", 00:11:53.515 "zoned": false, 00:11:53.515 "supported_io_types": { 00:11:53.515 "read": true, 00:11:53.515 "write": true, 00:11:53.515 "unmap": true, 00:11:53.515 "flush": true, 00:11:53.515 "reset": true, 00:11:53.515 "nvme_admin": false, 00:11:53.515 "nvme_io": false, 00:11:53.515 "nvme_io_md": 
false, 00:11:53.515 "write_zeroes": true, 00:11:53.515 "zcopy": true, 00:11:53.515 "get_zone_info": false, 00:11:53.515 "zone_management": false, 00:11:53.515 "zone_append": false, 00:11:53.515 "compare": false, 00:11:53.515 "compare_and_write": false, 00:11:53.516 "abort": true, 00:11:53.516 "seek_hole": false, 00:11:53.516 "seek_data": false, 00:11:53.516 "copy": true, 00:11:53.516 "nvme_iov_md": false 00:11:53.516 }, 00:11:53.516 "memory_domains": [ 00:11:53.516 { 00:11:53.516 "dma_device_id": "system", 00:11:53.516 "dma_device_type": 1 00:11:53.516 }, 00:11:53.516 { 00:11:53.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.516 "dma_device_type": 2 00:11:53.516 } 00:11:53.516 ], 00:11:53.516 "driver_specific": {} 00:11:53.516 } 00:11:53.516 ] 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.516 "name": "Existed_Raid", 00:11:53.516 "uuid": "a4f83c36-0516-4359-98f3-1f6c72ec38c6", 00:11:53.516 "strip_size_kb": 64, 00:11:53.516 "state": "online", 00:11:53.516 "raid_level": "raid0", 00:11:53.516 "superblock": false, 00:11:53.516 "num_base_bdevs": 2, 00:11:53.516 "num_base_bdevs_discovered": 2, 00:11:53.516 "num_base_bdevs_operational": 2, 00:11:53.516 "base_bdevs_list": [ 00:11:53.516 { 00:11:53.516 "name": "BaseBdev1", 00:11:53.516 "uuid": "c2644071-e003-450f-9d71-b7aa1c557b4b", 00:11:53.516 "is_configured": true, 00:11:53.516 "data_offset": 0, 00:11:53.516 "data_size": 65536 00:11:53.516 }, 00:11:53.516 { 00:11:53.516 "name": "BaseBdev2", 00:11:53.516 "uuid": "62d9e5ce-0565-4ac4-a6e4-55e28a9fd1ab", 00:11:53.516 "is_configured": true, 00:11:53.516 "data_offset": 0, 00:11:53.516 "data_size": 65536 00:11:53.516 } 00:11:53.516 ] 00:11:53.516 }' 00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:53.516 04:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.082 [2024-11-27 04:33:41.429663] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.082 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.082 "name": "Existed_Raid", 00:11:54.082 "aliases": [ 00:11:54.082 "a4f83c36-0516-4359-98f3-1f6c72ec38c6" 00:11:54.082 ], 00:11:54.082 "product_name": "Raid Volume", 00:11:54.082 "block_size": 512, 00:11:54.082 "num_blocks": 131072, 00:11:54.083 "uuid": "a4f83c36-0516-4359-98f3-1f6c72ec38c6", 00:11:54.083 "assigned_rate_limits": { 00:11:54.083 "rw_ios_per_sec": 0, 00:11:54.083 "rw_mbytes_per_sec": 0, 00:11:54.083 "r_mbytes_per_sec": 
0, 00:11:54.083 "w_mbytes_per_sec": 0 00:11:54.083 }, 00:11:54.083 "claimed": false, 00:11:54.083 "zoned": false, 00:11:54.083 "supported_io_types": { 00:11:54.083 "read": true, 00:11:54.083 "write": true, 00:11:54.083 "unmap": true, 00:11:54.083 "flush": true, 00:11:54.083 "reset": true, 00:11:54.083 "nvme_admin": false, 00:11:54.083 "nvme_io": false, 00:11:54.083 "nvme_io_md": false, 00:11:54.083 "write_zeroes": true, 00:11:54.083 "zcopy": false, 00:11:54.083 "get_zone_info": false, 00:11:54.083 "zone_management": false, 00:11:54.083 "zone_append": false, 00:11:54.083 "compare": false, 00:11:54.083 "compare_and_write": false, 00:11:54.083 "abort": false, 00:11:54.083 "seek_hole": false, 00:11:54.083 "seek_data": false, 00:11:54.083 "copy": false, 00:11:54.083 "nvme_iov_md": false 00:11:54.083 }, 00:11:54.083 "memory_domains": [ 00:11:54.083 { 00:11:54.083 "dma_device_id": "system", 00:11:54.083 "dma_device_type": 1 00:11:54.083 }, 00:11:54.083 { 00:11:54.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.083 "dma_device_type": 2 00:11:54.083 }, 00:11:54.083 { 00:11:54.083 "dma_device_id": "system", 00:11:54.083 "dma_device_type": 1 00:11:54.083 }, 00:11:54.083 { 00:11:54.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.083 "dma_device_type": 2 00:11:54.083 } 00:11:54.083 ], 00:11:54.083 "driver_specific": { 00:11:54.083 "raid": { 00:11:54.083 "uuid": "a4f83c36-0516-4359-98f3-1f6c72ec38c6", 00:11:54.083 "strip_size_kb": 64, 00:11:54.083 "state": "online", 00:11:54.083 "raid_level": "raid0", 00:11:54.083 "superblock": false, 00:11:54.083 "num_base_bdevs": 2, 00:11:54.083 "num_base_bdevs_discovered": 2, 00:11:54.083 "num_base_bdevs_operational": 2, 00:11:54.083 "base_bdevs_list": [ 00:11:54.083 { 00:11:54.083 "name": "BaseBdev1", 00:11:54.083 "uuid": "c2644071-e003-450f-9d71-b7aa1c557b4b", 00:11:54.083 "is_configured": true, 00:11:54.083 "data_offset": 0, 00:11:54.083 "data_size": 65536 00:11:54.083 }, 00:11:54.083 { 00:11:54.083 "name": "BaseBdev2", 
00:11:54.083 "uuid": "62d9e5ce-0565-4ac4-a6e4-55e28a9fd1ab", 00:11:54.083 "is_configured": true, 00:11:54.083 "data_offset": 0, 00:11:54.083 "data_size": 65536 00:11:54.083 } 00:11:54.083 ] 00:11:54.083 } 00:11:54.083 } 00:11:54.083 }' 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:54.083 BaseBdev2' 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.083 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.083 [2024-11-27 04:33:41.689442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.083 [2024-11-27 04:33:41.689488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.083 [2024-11-27 04:33:41.689565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.342 "name": "Existed_Raid", 00:11:54.342 "uuid": "a4f83c36-0516-4359-98f3-1f6c72ec38c6", 00:11:54.342 "strip_size_kb": 64, 00:11:54.342 
"state": "offline", 00:11:54.342 "raid_level": "raid0", 00:11:54.342 "superblock": false, 00:11:54.342 "num_base_bdevs": 2, 00:11:54.342 "num_base_bdevs_discovered": 1, 00:11:54.342 "num_base_bdevs_operational": 1, 00:11:54.342 "base_bdevs_list": [ 00:11:54.342 { 00:11:54.342 "name": null, 00:11:54.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.342 "is_configured": false, 00:11:54.342 "data_offset": 0, 00:11:54.342 "data_size": 65536 00:11:54.342 }, 00:11:54.342 { 00:11:54.342 "name": "BaseBdev2", 00:11:54.342 "uuid": "62d9e5ce-0565-4ac4-a6e4-55e28a9fd1ab", 00:11:54.342 "is_configured": true, 00:11:54.342 "data_offset": 0, 00:11:54.342 "data_size": 65536 00:11:54.342 } 00:11:54.342 ] 00:11:54.342 }' 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.342 04:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.909 [2024-11-27 04:33:42.362036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:54.909 [2024-11-27 04:33:42.362233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:54.909 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60774 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60774 ']' 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60774 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.910 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60774 00:11:55.168 killing process with pid 60774 00:11:55.168 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.168 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.168 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60774' 00:11:55.168 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60774 00:11:55.168 [2024-11-27 04:33:42.537465] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.168 04:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60774 00:11:55.168 [2024-11-27 04:33:42.552260] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:56.222 00:11:56.222 real 0m5.586s 00:11:56.222 user 0m8.448s 00:11:56.222 sys 0m0.751s 00:11:56.222 ************************************ 00:11:56.222 END TEST raid_state_function_test 00:11:56.222 ************************************ 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.222 04:33:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:11:56.222 04:33:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:11:56.222 04:33:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.222 04:33:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.222 ************************************ 00:11:56.222 START TEST raid_state_function_test_sb 00:11:56.222 ************************************ 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61027 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:56.222 Process raid pid: 61027 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61027' 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61027 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61027 ']' 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.222 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.222 04:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.222 [2024-11-27 04:33:43.838004] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:11:56.222 [2024-11-27 04:33:43.838416] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.480 [2024-11-27 04:33:44.032607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.738 [2024-11-27 04:33:44.191486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.019 [2024-11-27 04:33:44.405810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.019 [2024-11-27 04:33:44.406056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.292 [2024-11-27 04:33:44.820118] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:11:57.292 [2024-11-27 04:33:44.820214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.292 [2024-11-27 04:33:44.820231] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.292 [2024-11-27 04:33:44.820252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.292 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.293 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.293 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.293 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.293 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.293 04:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.293 
04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.293 04:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.293 04:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.293 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.293 "name": "Existed_Raid", 00:11:57.293 "uuid": "2bd94662-6de7-45a9-8b47-f5a30a6078aa", 00:11:57.293 "strip_size_kb": 64, 00:11:57.293 "state": "configuring", 00:11:57.293 "raid_level": "raid0", 00:11:57.293 "superblock": true, 00:11:57.293 "num_base_bdevs": 2, 00:11:57.293 "num_base_bdevs_discovered": 0, 00:11:57.293 "num_base_bdevs_operational": 2, 00:11:57.293 "base_bdevs_list": [ 00:11:57.293 { 00:11:57.293 "name": "BaseBdev1", 00:11:57.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.293 "is_configured": false, 00:11:57.293 "data_offset": 0, 00:11:57.293 "data_size": 0 00:11:57.293 }, 00:11:57.293 { 00:11:57.293 "name": "BaseBdev2", 00:11:57.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.293 "is_configured": false, 00:11:57.293 "data_offset": 0, 00:11:57.293 "data_size": 0 00:11:57.293 } 00:11:57.293 ] 00:11:57.293 }' 00:11:57.293 04:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.293 04:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.861 [2024-11-27 04:33:45.332224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:11:57.861 [2024-11-27 04:33:45.332390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.861 [2024-11-27 04:33:45.340179] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:57.861 [2024-11-27 04:33:45.340233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.861 [2024-11-27 04:33:45.340249] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.861 [2024-11-27 04:33:45.340268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.861 [2024-11-27 04:33:45.385014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.861 BaseBdev1 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.861 [ 00:11:57.861 { 00:11:57.861 "name": "BaseBdev1", 00:11:57.861 "aliases": [ 00:11:57.861 "ec91a18e-8b78-4ee2-844d-c05dcce8c353" 00:11:57.861 ], 00:11:57.861 "product_name": "Malloc disk", 00:11:57.861 "block_size": 512, 00:11:57.861 "num_blocks": 65536, 00:11:57.861 "uuid": "ec91a18e-8b78-4ee2-844d-c05dcce8c353", 00:11:57.861 "assigned_rate_limits": { 00:11:57.861 "rw_ios_per_sec": 0, 00:11:57.861 "rw_mbytes_per_sec": 0, 00:11:57.861 "r_mbytes_per_sec": 0, 00:11:57.861 "w_mbytes_per_sec": 0 00:11:57.861 }, 00:11:57.861 "claimed": true, 
00:11:57.861 "claim_type": "exclusive_write", 00:11:57.861 "zoned": false, 00:11:57.861 "supported_io_types": { 00:11:57.861 "read": true, 00:11:57.861 "write": true, 00:11:57.861 "unmap": true, 00:11:57.861 "flush": true, 00:11:57.861 "reset": true, 00:11:57.861 "nvme_admin": false, 00:11:57.861 "nvme_io": false, 00:11:57.861 "nvme_io_md": false, 00:11:57.861 "write_zeroes": true, 00:11:57.861 "zcopy": true, 00:11:57.861 "get_zone_info": false, 00:11:57.861 "zone_management": false, 00:11:57.861 "zone_append": false, 00:11:57.861 "compare": false, 00:11:57.861 "compare_and_write": false, 00:11:57.861 "abort": true, 00:11:57.861 "seek_hole": false, 00:11:57.861 "seek_data": false, 00:11:57.861 "copy": true, 00:11:57.861 "nvme_iov_md": false 00:11:57.861 }, 00:11:57.861 "memory_domains": [ 00:11:57.861 { 00:11:57.861 "dma_device_id": "system", 00:11:57.861 "dma_device_type": 1 00:11:57.861 }, 00:11:57.861 { 00:11:57.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.861 "dma_device_type": 2 00:11:57.861 } 00:11:57.861 ], 00:11:57.861 "driver_specific": {} 00:11:57.861 } 00:11:57.861 ] 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.861 04:33:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.861 "name": "Existed_Raid", 00:11:57.861 "uuid": "4111d82a-04a7-4025-94db-43b21ae3122d", 00:11:57.861 "strip_size_kb": 64, 00:11:57.861 "state": "configuring", 00:11:57.861 "raid_level": "raid0", 00:11:57.861 "superblock": true, 00:11:57.861 "num_base_bdevs": 2, 00:11:57.861 "num_base_bdevs_discovered": 1, 00:11:57.861 "num_base_bdevs_operational": 2, 00:11:57.861 "base_bdevs_list": [ 00:11:57.861 { 00:11:57.861 "name": "BaseBdev1", 00:11:57.861 "uuid": "ec91a18e-8b78-4ee2-844d-c05dcce8c353", 00:11:57.861 "is_configured": true, 00:11:57.861 "data_offset": 2048, 00:11:57.861 "data_size": 63488 00:11:57.861 }, 00:11:57.861 { 00:11:57.861 "name": "BaseBdev2", 00:11:57.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.861 
"is_configured": false, 00:11:57.861 "data_offset": 0, 00:11:57.861 "data_size": 0 00:11:57.861 } 00:11:57.861 ] 00:11:57.861 }' 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.861 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.428 [2024-11-27 04:33:45.917213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:58.428 [2024-11-27 04:33:45.917278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.428 [2024-11-27 04:33:45.925248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.428 [2024-11-27 04:33:45.927695] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:58.428 [2024-11-27 04:33:45.927923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.428 04:33:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.428 04:33:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.428 "name": "Existed_Raid", 00:11:58.428 "uuid": "1843a42a-0a74-4634-bfb7-a14367213e2c", 00:11:58.428 "strip_size_kb": 64, 00:11:58.428 "state": "configuring", 00:11:58.428 "raid_level": "raid0", 00:11:58.428 "superblock": true, 00:11:58.428 "num_base_bdevs": 2, 00:11:58.428 "num_base_bdevs_discovered": 1, 00:11:58.428 "num_base_bdevs_operational": 2, 00:11:58.428 "base_bdevs_list": [ 00:11:58.428 { 00:11:58.428 "name": "BaseBdev1", 00:11:58.428 "uuid": "ec91a18e-8b78-4ee2-844d-c05dcce8c353", 00:11:58.428 "is_configured": true, 00:11:58.428 "data_offset": 2048, 00:11:58.428 "data_size": 63488 00:11:58.428 }, 00:11:58.428 { 00:11:58.428 "name": "BaseBdev2", 00:11:58.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.428 "is_configured": false, 00:11:58.428 "data_offset": 0, 00:11:58.428 "data_size": 0 00:11:58.428 } 00:11:58.428 ] 00:11:58.428 }' 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.428 04:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.994 [2024-11-27 04:33:46.488154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.994 [2024-11-27 04:33:46.488746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:58.994 [2024-11-27 04:33:46.488910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:58.994 BaseBdev2 00:11:58.994 [2024-11-27 04:33:46.489376] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:58.994 [2024-11-27 04:33:46.489588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:58.994 [2024-11-27 04:33:46.489612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:58.994 [2024-11-27 04:33:46.489838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:58.994 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.994 
04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.994 [ 00:11:58.994 { 00:11:58.994 "name": "BaseBdev2", 00:11:58.994 "aliases": [ 00:11:58.994 "48ee609f-0f2e-464a-af98-805623653362" 00:11:58.994 ], 00:11:58.994 "product_name": "Malloc disk", 00:11:58.994 "block_size": 512, 00:11:58.994 "num_blocks": 65536, 00:11:58.994 "uuid": "48ee609f-0f2e-464a-af98-805623653362", 00:11:58.994 "assigned_rate_limits": { 00:11:58.994 "rw_ios_per_sec": 0, 00:11:58.994 "rw_mbytes_per_sec": 0, 00:11:58.994 "r_mbytes_per_sec": 0, 00:11:58.994 "w_mbytes_per_sec": 0 00:11:58.994 }, 00:11:58.994 "claimed": true, 00:11:58.994 "claim_type": "exclusive_write", 00:11:58.994 "zoned": false, 00:11:58.994 "supported_io_types": { 00:11:58.994 "read": true, 00:11:58.994 "write": true, 00:11:58.994 "unmap": true, 00:11:58.994 "flush": true, 00:11:58.994 "reset": true, 00:11:58.994 "nvme_admin": false, 00:11:58.994 "nvme_io": false, 00:11:58.994 "nvme_io_md": false, 00:11:58.994 "write_zeroes": true, 00:11:58.994 "zcopy": true, 00:11:58.994 "get_zone_info": false, 00:11:58.994 "zone_management": false, 00:11:58.994 "zone_append": false, 00:11:58.994 "compare": false, 00:11:58.995 "compare_and_write": false, 00:11:58.995 "abort": true, 00:11:58.995 "seek_hole": false, 00:11:58.995 "seek_data": false, 00:11:58.995 "copy": true, 00:11:58.995 "nvme_iov_md": false 00:11:58.995 }, 00:11:58.995 "memory_domains": [ 00:11:58.995 { 00:11:58.995 "dma_device_id": "system", 00:11:58.995 "dma_device_type": 1 00:11:58.995 }, 00:11:58.995 { 00:11:58.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.995 "dma_device_type": 2 00:11:58.995 } 00:11:58.995 ], 00:11:58.995 "driver_specific": {} 00:11:58.995 } 00:11:58.995 ] 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.995 04:33:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.995 04:33:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.995 "name": "Existed_Raid", 00:11:58.995 "uuid": "1843a42a-0a74-4634-bfb7-a14367213e2c", 00:11:58.995 "strip_size_kb": 64, 00:11:58.995 "state": "online", 00:11:58.995 "raid_level": "raid0", 00:11:58.995 "superblock": true, 00:11:58.995 "num_base_bdevs": 2, 00:11:58.995 "num_base_bdevs_discovered": 2, 00:11:58.995 "num_base_bdevs_operational": 2, 00:11:58.995 "base_bdevs_list": [ 00:11:58.995 { 00:11:58.995 "name": "BaseBdev1", 00:11:58.995 "uuid": "ec91a18e-8b78-4ee2-844d-c05dcce8c353", 00:11:58.995 "is_configured": true, 00:11:58.995 "data_offset": 2048, 00:11:58.995 "data_size": 63488 00:11:58.995 }, 00:11:58.995 { 00:11:58.995 "name": "BaseBdev2", 00:11:58.995 "uuid": "48ee609f-0f2e-464a-af98-805623653362", 00:11:58.995 "is_configured": true, 00:11:58.995 "data_offset": 2048, 00:11:58.995 "data_size": 63488 00:11:58.995 } 00:11:58.995 ] 00:11:58.995 }' 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.995 04:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.562 [2024-11-27 04:33:47.044677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.562 "name": "Existed_Raid", 00:11:59.562 "aliases": [ 00:11:59.562 "1843a42a-0a74-4634-bfb7-a14367213e2c" 00:11:59.562 ], 00:11:59.562 "product_name": "Raid Volume", 00:11:59.562 "block_size": 512, 00:11:59.562 "num_blocks": 126976, 00:11:59.562 "uuid": "1843a42a-0a74-4634-bfb7-a14367213e2c", 00:11:59.562 "assigned_rate_limits": { 00:11:59.562 "rw_ios_per_sec": 0, 00:11:59.562 "rw_mbytes_per_sec": 0, 00:11:59.562 "r_mbytes_per_sec": 0, 00:11:59.562 "w_mbytes_per_sec": 0 00:11:59.562 }, 00:11:59.562 "claimed": false, 00:11:59.562 "zoned": false, 00:11:59.562 "supported_io_types": { 00:11:59.562 "read": true, 00:11:59.562 "write": true, 00:11:59.562 "unmap": true, 00:11:59.562 "flush": true, 00:11:59.562 "reset": true, 00:11:59.562 "nvme_admin": false, 00:11:59.562 "nvme_io": false, 00:11:59.562 "nvme_io_md": false, 00:11:59.562 "write_zeroes": true, 00:11:59.562 "zcopy": false, 00:11:59.562 "get_zone_info": false, 00:11:59.562 "zone_management": false, 00:11:59.562 "zone_append": false, 00:11:59.562 "compare": false, 00:11:59.562 "compare_and_write": false, 00:11:59.562 "abort": false, 00:11:59.562 "seek_hole": false, 00:11:59.562 "seek_data": false, 00:11:59.562 "copy": false, 00:11:59.562 "nvme_iov_md": false 00:11:59.562 }, 00:11:59.562 "memory_domains": [ 00:11:59.562 { 00:11:59.562 
"dma_device_id": "system", 00:11:59.562 "dma_device_type": 1 00:11:59.562 }, 00:11:59.562 { 00:11:59.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.562 "dma_device_type": 2 00:11:59.562 }, 00:11:59.562 { 00:11:59.562 "dma_device_id": "system", 00:11:59.562 "dma_device_type": 1 00:11:59.562 }, 00:11:59.562 { 00:11:59.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.562 "dma_device_type": 2 00:11:59.562 } 00:11:59.562 ], 00:11:59.562 "driver_specific": { 00:11:59.562 "raid": { 00:11:59.562 "uuid": "1843a42a-0a74-4634-bfb7-a14367213e2c", 00:11:59.562 "strip_size_kb": 64, 00:11:59.562 "state": "online", 00:11:59.562 "raid_level": "raid0", 00:11:59.562 "superblock": true, 00:11:59.562 "num_base_bdevs": 2, 00:11:59.562 "num_base_bdevs_discovered": 2, 00:11:59.562 "num_base_bdevs_operational": 2, 00:11:59.562 "base_bdevs_list": [ 00:11:59.562 { 00:11:59.562 "name": "BaseBdev1", 00:11:59.562 "uuid": "ec91a18e-8b78-4ee2-844d-c05dcce8c353", 00:11:59.562 "is_configured": true, 00:11:59.562 "data_offset": 2048, 00:11:59.562 "data_size": 63488 00:11:59.562 }, 00:11:59.562 { 00:11:59.562 "name": "BaseBdev2", 00:11:59.562 "uuid": "48ee609f-0f2e-464a-af98-805623653362", 00:11:59.562 "is_configured": true, 00:11:59.562 "data_offset": 2048, 00:11:59.562 "data_size": 63488 00:11:59.562 } 00:11:59.562 ] 00:11:59.562 } 00:11:59.562 } 00:11:59.562 }' 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:59.562 BaseBdev2' 00:11:59.562 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.821 04:33:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.821 [2024-11-27 04:33:47.300446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.821 [2024-11-27 04:33:47.300490] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.821 [2024-11-27 04:33:47.300557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.821 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.079 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.079 "name": "Existed_Raid", 00:12:00.079 "uuid": "1843a42a-0a74-4634-bfb7-a14367213e2c", 00:12:00.079 "strip_size_kb": 64, 00:12:00.079 "state": "offline", 00:12:00.079 "raid_level": "raid0", 00:12:00.079 "superblock": true, 00:12:00.079 "num_base_bdevs": 2, 00:12:00.079 "num_base_bdevs_discovered": 1, 00:12:00.079 "num_base_bdevs_operational": 1, 00:12:00.079 "base_bdevs_list": [ 00:12:00.079 { 00:12:00.079 "name": null, 00:12:00.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.079 "is_configured": false, 00:12:00.079 "data_offset": 0, 00:12:00.079 "data_size": 63488 00:12:00.079 }, 00:12:00.079 { 00:12:00.079 "name": "BaseBdev2", 00:12:00.079 "uuid": "48ee609f-0f2e-464a-af98-805623653362", 00:12:00.079 "is_configured": true, 00:12:00.079 "data_offset": 2048, 00:12:00.079 "data_size": 63488 00:12:00.079 } 00:12:00.079 ] 
00:12:00.079 }' 00:12:00.079 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.079 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.337 04:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.337 [2024-11-27 04:33:47.926108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:00.337 [2024-11-27 04:33:47.926341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.595 04:33:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61027 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61027 ']' 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61027 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61027 00:12:00.595 killing process with pid 61027 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61027' 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61027 00:12:00.595 [2024-11-27 04:33:48.108293] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.595 04:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61027 00:12:00.595 [2024-11-27 04:33:48.122970] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.968 04:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:01.968 00:12:01.968 real 0m5.438s 00:12:01.968 user 0m8.214s 00:12:01.968 sys 0m0.758s 00:12:01.968 04:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.968 ************************************ 00:12:01.968 END TEST raid_state_function_test_sb 00:12:01.968 ************************************ 00:12:01.968 04:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.968 04:33:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:12:01.968 04:33:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:01.968 04:33:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.968 04:33:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.968 ************************************ 00:12:01.968 START TEST raid_superblock_test 00:12:01.968 ************************************ 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:01.968 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61285 00:12:01.969 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:01.969 04:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61285 00:12:01.969 04:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61285 ']' 00:12:01.969 
04:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.969 04:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.969 04:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.969 04:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.969 04:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.969 [2024-11-27 04:33:49.320144] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:01.969 [2024-11-27 04:33:49.320549] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61285 ] 00:12:01.969 [2024-11-27 04:33:49.502829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.227 [2024-11-27 04:33:49.637319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.227 [2024-11-27 04:33:49.845451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.227 [2024-11-27 04:33:49.845534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.841 malloc1 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.841 [2024-11-27 04:33:50.387252] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:02.841 [2024-11-27 04:33:50.387509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.841 [2024-11-27 04:33:50.387616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:02.841 [2024-11-27 04:33:50.387891] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:12:02.841 [2024-11-27 04:33:50.391680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.841 [2024-11-27 04:33:50.391922] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:02.841 pt1 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:02.841 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.842 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.842 malloc2 00:12:02.842 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.842 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:02.842 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:02.842 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.842 [2024-11-27 04:33:50.458331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:02.842 [2024-11-27 04:33:50.458401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.842 [2024-11-27 04:33:50.458439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:02.842 [2024-11-27 04:33:50.458453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.842 [2024-11-27 04:33:50.461299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.842 [2024-11-27 04:33:50.461345] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:03.100 pt2 00:12:03.100 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.100 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 [2024-11-27 04:33:50.470403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:03.101 [2024-11-27 04:33:50.472912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:03.101 [2024-11-27 04:33:50.473123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:03.101 [2024-11-27 04:33:50.473143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:12:03.101 [2024-11-27 04:33:50.473459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:03.101 [2024-11-27 04:33:50.473651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:03.101 [2024-11-27 04:33:50.473670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:03.101 [2024-11-27 04:33:50.473935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 04:33:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.101 "name": "raid_bdev1", 00:12:03.101 "uuid": "9b9c02fd-0dbc-4f5e-b963-fd59827b17ba", 00:12:03.101 "strip_size_kb": 64, 00:12:03.101 "state": "online", 00:12:03.101 "raid_level": "raid0", 00:12:03.101 "superblock": true, 00:12:03.101 "num_base_bdevs": 2, 00:12:03.101 "num_base_bdevs_discovered": 2, 00:12:03.101 "num_base_bdevs_operational": 2, 00:12:03.101 "base_bdevs_list": [ 00:12:03.101 { 00:12:03.101 "name": "pt1", 00:12:03.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.101 "is_configured": true, 00:12:03.101 "data_offset": 2048, 00:12:03.101 "data_size": 63488 00:12:03.101 }, 00:12:03.101 { 00:12:03.101 "name": "pt2", 00:12:03.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.101 "is_configured": true, 00:12:03.101 "data_offset": 2048, 00:12:03.101 "data_size": 63488 00:12:03.101 } 00:12:03.101 ] 00:12:03.101 }' 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.101 04:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.668 04:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.668 [2024-11-27 04:33:51.010913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.668 "name": "raid_bdev1", 00:12:03.668 "aliases": [ 00:12:03.668 "9b9c02fd-0dbc-4f5e-b963-fd59827b17ba" 00:12:03.668 ], 00:12:03.668 "product_name": "Raid Volume", 00:12:03.668 "block_size": 512, 00:12:03.668 "num_blocks": 126976, 00:12:03.668 "uuid": "9b9c02fd-0dbc-4f5e-b963-fd59827b17ba", 00:12:03.668 "assigned_rate_limits": { 00:12:03.668 "rw_ios_per_sec": 0, 00:12:03.668 "rw_mbytes_per_sec": 0, 00:12:03.668 "r_mbytes_per_sec": 0, 00:12:03.668 "w_mbytes_per_sec": 0 00:12:03.668 }, 00:12:03.668 "claimed": false, 00:12:03.668 "zoned": false, 00:12:03.668 "supported_io_types": { 00:12:03.668 "read": true, 00:12:03.668 "write": true, 00:12:03.668 "unmap": true, 00:12:03.668 "flush": true, 00:12:03.668 "reset": true, 00:12:03.668 "nvme_admin": false, 00:12:03.668 "nvme_io": false, 00:12:03.668 "nvme_io_md": false, 00:12:03.668 "write_zeroes": true, 00:12:03.668 "zcopy": false, 00:12:03.668 "get_zone_info": false, 00:12:03.668 "zone_management": false, 00:12:03.668 "zone_append": false, 00:12:03.668 "compare": false, 00:12:03.668 "compare_and_write": false, 00:12:03.668 "abort": false, 00:12:03.668 
"seek_hole": false, 00:12:03.668 "seek_data": false, 00:12:03.668 "copy": false, 00:12:03.668 "nvme_iov_md": false 00:12:03.668 }, 00:12:03.668 "memory_domains": [ 00:12:03.668 { 00:12:03.668 "dma_device_id": "system", 00:12:03.668 "dma_device_type": 1 00:12:03.668 }, 00:12:03.668 { 00:12:03.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.668 "dma_device_type": 2 00:12:03.668 }, 00:12:03.668 { 00:12:03.668 "dma_device_id": "system", 00:12:03.668 "dma_device_type": 1 00:12:03.668 }, 00:12:03.668 { 00:12:03.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.668 "dma_device_type": 2 00:12:03.668 } 00:12:03.668 ], 00:12:03.668 "driver_specific": { 00:12:03.668 "raid": { 00:12:03.668 "uuid": "9b9c02fd-0dbc-4f5e-b963-fd59827b17ba", 00:12:03.668 "strip_size_kb": 64, 00:12:03.668 "state": "online", 00:12:03.668 "raid_level": "raid0", 00:12:03.668 "superblock": true, 00:12:03.668 "num_base_bdevs": 2, 00:12:03.668 "num_base_bdevs_discovered": 2, 00:12:03.668 "num_base_bdevs_operational": 2, 00:12:03.668 "base_bdevs_list": [ 00:12:03.668 { 00:12:03.668 "name": "pt1", 00:12:03.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.668 "is_configured": true, 00:12:03.668 "data_offset": 2048, 00:12:03.668 "data_size": 63488 00:12:03.668 }, 00:12:03.668 { 00:12:03.668 "name": "pt2", 00:12:03.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.668 "is_configured": true, 00:12:03.668 "data_offset": 2048, 00:12:03.668 "data_size": 63488 00:12:03.668 } 00:12:03.668 ] 00:12:03.668 } 00:12:03.668 } 00:12:03.668 }' 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:03.668 pt2' 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.668 [2024-11-27 04:33:51.246960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.668 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9b9c02fd-0dbc-4f5e-b963-fd59827b17ba 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9b9c02fd-0dbc-4f5e-b963-fd59827b17ba ']' 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.927 [2024-11-27 04:33:51.306551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.927 [2024-11-27 04:33:51.306742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.927 [2024-11-27 04:33:51.306890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.927 [2024-11-27 04:33:51.306961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.927 [2024-11-27 04:33:51.306983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:03.927 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:03.928 04:33:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.928 [2024-11-27 04:33:51.450603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:03.928 [2024-11-27 04:33:51.453218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:03.928 [2024-11-27 04:33:51.453311] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:03.928 [2024-11-27 04:33:51.453389] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:03.928 [2024-11-27 04:33:51.453417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.928 [2024-11-27 04:33:51.453437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:03.928 request: 00:12:03.928 { 00:12:03.928 "name": "raid_bdev1", 00:12:03.928 "raid_level": "raid0", 00:12:03.928 "base_bdevs": [ 00:12:03.928 "malloc1", 00:12:03.928 "malloc2" 00:12:03.928 ], 00:12:03.928 "strip_size_kb": 64, 00:12:03.928 "superblock": false, 00:12:03.928 "method": "bdev_raid_create", 00:12:03.928 "req_id": 1 00:12:03.928 } 00:12:03.928 Got JSON-RPC error response 00:12:03.928 response: 00:12:03.928 { 00:12:03.928 "code": -17, 00:12:03.928 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:03.928 } 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.928 04:33:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.928 [2024-11-27 04:33:51.514596] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:03.928 [2024-11-27 04:33:51.514814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.928 [2024-11-27 04:33:51.514948] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:03.928 [2024-11-27 04:33:51.515065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.928 [2024-11-27 04:33:51.518165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.928 [2024-11-27 04:33:51.518329] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:03.928 [2024-11-27 04:33:51.518538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:03.928 [2024-11-27 04:33:51.518717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:03.928 pt1 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:12:03.928 04:33:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.928 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.187 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.187 "name": "raid_bdev1", 00:12:04.187 "uuid": "9b9c02fd-0dbc-4f5e-b963-fd59827b17ba", 00:12:04.187 "strip_size_kb": 64, 00:12:04.187 "state": "configuring", 00:12:04.187 "raid_level": "raid0", 00:12:04.187 "superblock": true, 00:12:04.187 "num_base_bdevs": 2, 00:12:04.187 "num_base_bdevs_discovered": 1, 00:12:04.187 "num_base_bdevs_operational": 2, 00:12:04.187 "base_bdevs_list": [ 
00:12:04.187 { 00:12:04.187 "name": "pt1", 00:12:04.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.187 "is_configured": true, 00:12:04.187 "data_offset": 2048, 00:12:04.187 "data_size": 63488 00:12:04.187 }, 00:12:04.187 { 00:12:04.187 "name": null, 00:12:04.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.187 "is_configured": false, 00:12:04.187 "data_offset": 2048, 00:12:04.187 "data_size": 63488 00:12:04.187 } 00:12:04.187 ] 00:12:04.187 }' 00:12:04.187 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.187 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.445 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:04.445 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:04.445 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:04.445 04:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:04.445 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.445 04:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.445 [2024-11-27 04:33:51.998848] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:04.445 [2024-11-27 04:33:51.999086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.445 [2024-11-27 04:33:51.999128] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:04.445 [2024-11-27 04:33:51.999147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.445 [2024-11-27 04:33:51.999758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.445 [2024-11-27 04:33:51.999790] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:04.445 [2024-11-27 04:33:51.999913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:04.445 [2024-11-27 04:33:51.999957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:04.445 [2024-11-27 04:33:52.000103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:04.445 [2024-11-27 04:33:52.000125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:04.445 [2024-11-27 04:33:52.000494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:04.445 [2024-11-27 04:33:52.000703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:04.445 [2024-11-27 04:33:52.000717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:04.445 [2024-11-27 04:33:52.000904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.445 pt2 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.445 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.445 "name": "raid_bdev1", 00:12:04.445 "uuid": "9b9c02fd-0dbc-4f5e-b963-fd59827b17ba", 00:12:04.446 "strip_size_kb": 64, 00:12:04.446 "state": "online", 00:12:04.446 "raid_level": "raid0", 00:12:04.446 "superblock": true, 00:12:04.446 "num_base_bdevs": 2, 00:12:04.446 "num_base_bdevs_discovered": 2, 00:12:04.446 "num_base_bdevs_operational": 2, 00:12:04.446 "base_bdevs_list": [ 00:12:04.446 { 00:12:04.446 "name": "pt1", 00:12:04.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.446 "is_configured": true, 00:12:04.446 "data_offset": 2048, 00:12:04.446 "data_size": 63488 00:12:04.446 }, 00:12:04.446 { 00:12:04.446 "name": "pt2", 00:12:04.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.446 "is_configured": true, 00:12:04.446 "data_offset": 2048, 00:12:04.446 "data_size": 
63488 00:12:04.446 } 00:12:04.446 ] 00:12:04.446 }' 00:12:04.446 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.446 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.012 [2024-11-27 04:33:52.555296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.012 "name": "raid_bdev1", 00:12:05.012 "aliases": [ 00:12:05.012 "9b9c02fd-0dbc-4f5e-b963-fd59827b17ba" 00:12:05.012 ], 00:12:05.012 "product_name": "Raid Volume", 00:12:05.012 "block_size": 512, 00:12:05.012 "num_blocks": 126976, 00:12:05.012 "uuid": "9b9c02fd-0dbc-4f5e-b963-fd59827b17ba", 00:12:05.012 "assigned_rate_limits": { 00:12:05.012 
"rw_ios_per_sec": 0, 00:12:05.012 "rw_mbytes_per_sec": 0, 00:12:05.012 "r_mbytes_per_sec": 0, 00:12:05.012 "w_mbytes_per_sec": 0 00:12:05.012 }, 00:12:05.012 "claimed": false, 00:12:05.012 "zoned": false, 00:12:05.012 "supported_io_types": { 00:12:05.012 "read": true, 00:12:05.012 "write": true, 00:12:05.012 "unmap": true, 00:12:05.012 "flush": true, 00:12:05.012 "reset": true, 00:12:05.012 "nvme_admin": false, 00:12:05.012 "nvme_io": false, 00:12:05.012 "nvme_io_md": false, 00:12:05.012 "write_zeroes": true, 00:12:05.012 "zcopy": false, 00:12:05.012 "get_zone_info": false, 00:12:05.012 "zone_management": false, 00:12:05.012 "zone_append": false, 00:12:05.012 "compare": false, 00:12:05.012 "compare_and_write": false, 00:12:05.012 "abort": false, 00:12:05.012 "seek_hole": false, 00:12:05.012 "seek_data": false, 00:12:05.012 "copy": false, 00:12:05.012 "nvme_iov_md": false 00:12:05.012 }, 00:12:05.012 "memory_domains": [ 00:12:05.012 { 00:12:05.012 "dma_device_id": "system", 00:12:05.012 "dma_device_type": 1 00:12:05.012 }, 00:12:05.012 { 00:12:05.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.012 "dma_device_type": 2 00:12:05.012 }, 00:12:05.012 { 00:12:05.012 "dma_device_id": "system", 00:12:05.012 "dma_device_type": 1 00:12:05.012 }, 00:12:05.012 { 00:12:05.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.012 "dma_device_type": 2 00:12:05.012 } 00:12:05.012 ], 00:12:05.012 "driver_specific": { 00:12:05.012 "raid": { 00:12:05.012 "uuid": "9b9c02fd-0dbc-4f5e-b963-fd59827b17ba", 00:12:05.012 "strip_size_kb": 64, 00:12:05.012 "state": "online", 00:12:05.012 "raid_level": "raid0", 00:12:05.012 "superblock": true, 00:12:05.012 "num_base_bdevs": 2, 00:12:05.012 "num_base_bdevs_discovered": 2, 00:12:05.012 "num_base_bdevs_operational": 2, 00:12:05.012 "base_bdevs_list": [ 00:12:05.012 { 00:12:05.012 "name": "pt1", 00:12:05.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.012 "is_configured": true, 00:12:05.012 "data_offset": 2048, 00:12:05.012 
"data_size": 63488 00:12:05.012 }, 00:12:05.012 { 00:12:05.012 "name": "pt2", 00:12:05.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.012 "is_configured": true, 00:12:05.012 "data_offset": 2048, 00:12:05.012 "data_size": 63488 00:12:05.012 } 00:12:05.012 ] 00:12:05.012 } 00:12:05.012 } 00:12:05.012 }' 00:12:05.012 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:05.271 pt2' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.271 [2024-11-27 04:33:52.815311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9b9c02fd-0dbc-4f5e-b963-fd59827b17ba '!=' 9b9c02fd-0dbc-4f5e-b963-fd59827b17ba ']' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61285 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61285 ']' 
00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61285 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.271 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61285 00:12:05.530 killing process with pid 61285 00:12:05.530 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.530 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.530 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61285' 00:12:05.530 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61285 00:12:05.530 [2024-11-27 04:33:52.915375] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.530 04:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61285 00:12:05.530 [2024-11-27 04:33:52.916825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.530 [2024-11-27 04:33:52.916909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.530 [2024-11-27 04:33:52.916931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:05.530 [2024-11-27 04:33:53.104748] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.918 04:33:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:06.918 00:12:06.918 real 0m4.952s 00:12:06.918 user 0m7.318s 00:12:06.918 sys 0m0.686s 00:12:06.918 04:33:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.918 04:33:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.918 ************************************ 00:12:06.918 END TEST raid_superblock_test 00:12:06.918 ************************************ 00:12:06.918 04:33:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:12:06.918 04:33:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:06.918 04:33:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.918 04:33:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.918 ************************************ 00:12:06.918 START TEST raid_read_error_test 00:12:06.918 ************************************ 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.918 
04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.niyso56lOe 00:12:06.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61502 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61502 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61502 ']' 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.918 04:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.918 [2024-11-27 04:33:54.351114] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:12:06.918 [2024-11-27 04:33:54.351502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61502 ] 00:12:07.176 [2024-11-27 04:33:54.542178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.176 [2024-11-27 04:33:54.699833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.434 [2024-11-27 04:33:54.913494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.434 [2024-11-27 04:33:54.913737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.000 BaseBdev1_malloc 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.000 true 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.000 [2024-11-27 04:33:55.427006] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:08.000 [2024-11-27 04:33:55.427099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.000 [2024-11-27 04:33:55.427135] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:08.000 [2024-11-27 04:33:55.427153] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.000 [2024-11-27 04:33:55.430111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.000 [2024-11-27 04:33:55.430296] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:08.000 BaseBdev1 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.000 BaseBdev2_malloc 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.000 true 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.000 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.000 [2024-11-27 04:33:55.491379] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:08.000 [2024-11-27 04:33:55.491468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.000 [2024-11-27 04:33:55.491498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:08.000 [2024-11-27 04:33:55.491516] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.000 [2024-11-27 04:33:55.494476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.000 [2024-11-27 04:33:55.494530] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:08.000 BaseBdev2 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.001 [2024-11-27 04:33:55.503506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:12:08.001 [2024-11-27 04:33:55.506208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.001 [2024-11-27 04:33:55.506481] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:08.001 [2024-11-27 04:33:55.506509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:08.001 [2024-11-27 04:33:55.506885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:08.001 [2024-11-27 04:33:55.507125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:08.001 [2024-11-27 04:33:55.507154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:08.001 [2024-11-27 04:33:55.507444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.001 "name": "raid_bdev1", 00:12:08.001 "uuid": "0289cb86-1a3a-4cef-8845-9b412ba83259", 00:12:08.001 "strip_size_kb": 64, 00:12:08.001 "state": "online", 00:12:08.001 "raid_level": "raid0", 00:12:08.001 "superblock": true, 00:12:08.001 "num_base_bdevs": 2, 00:12:08.001 "num_base_bdevs_discovered": 2, 00:12:08.001 "num_base_bdevs_operational": 2, 00:12:08.001 "base_bdevs_list": [ 00:12:08.001 { 00:12:08.001 "name": "BaseBdev1", 00:12:08.001 "uuid": "9d058eef-c264-5e2f-a60e-82757f17f629", 00:12:08.001 "is_configured": true, 00:12:08.001 "data_offset": 2048, 00:12:08.001 "data_size": 63488 00:12:08.001 }, 00:12:08.001 { 00:12:08.001 "name": "BaseBdev2", 00:12:08.001 "uuid": "d7933a6e-7987-5697-b343-b5b819fcd6b4", 00:12:08.001 "is_configured": true, 00:12:08.001 "data_offset": 2048, 00:12:08.001 "data_size": 63488 00:12:08.001 } 00:12:08.001 ] 00:12:08.001 }' 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.001 04:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.566 04:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:08.566 04:33:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:08.567 [2024-11-27 04:33:56.137067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.499 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.500 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.500 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.500 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.500 "name": "raid_bdev1", 00:12:09.500 "uuid": "0289cb86-1a3a-4cef-8845-9b412ba83259", 00:12:09.500 "strip_size_kb": 64, 00:12:09.500 "state": "online", 00:12:09.500 "raid_level": "raid0", 00:12:09.500 "superblock": true, 00:12:09.500 "num_base_bdevs": 2, 00:12:09.500 "num_base_bdevs_discovered": 2, 00:12:09.500 "num_base_bdevs_operational": 2, 00:12:09.500 "base_bdevs_list": [ 00:12:09.500 { 00:12:09.500 "name": "BaseBdev1", 00:12:09.500 "uuid": "9d058eef-c264-5e2f-a60e-82757f17f629", 00:12:09.500 "is_configured": true, 00:12:09.500 "data_offset": 2048, 00:12:09.500 "data_size": 63488 00:12:09.500 }, 00:12:09.500 { 00:12:09.500 "name": "BaseBdev2", 00:12:09.500 "uuid": "d7933a6e-7987-5697-b343-b5b819fcd6b4", 00:12:09.500 "is_configured": true, 00:12:09.500 "data_offset": 2048, 00:12:09.500 "data_size": 63488 00:12:09.500 } 00:12:09.500 ] 00:12:09.500 }' 00:12:09.500 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.500 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:10.065 04:33:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.065 [2024-11-27 04:33:57.526827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.065 [2024-11-27 04:33:57.527006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.065 [2024-11-27 04:33:57.530567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.065 { 00:12:10.065 "results": [ 00:12:10.065 { 00:12:10.065 "job": "raid_bdev1", 00:12:10.065 "core_mask": "0x1", 00:12:10.065 "workload": "randrw", 00:12:10.065 "percentage": 50, 00:12:10.065 "status": "finished", 00:12:10.065 "queue_depth": 1, 00:12:10.065 "io_size": 131072, 00:12:10.065 "runtime": 1.387546, 00:12:10.065 "iops": 10321.819961284167, 00:12:10.065 "mibps": 1290.227495160521, 00:12:10.065 "io_failed": 1, 00:12:10.065 "io_timeout": 0, 00:12:10.065 "avg_latency_us": 135.14148407202657, 00:12:10.065 "min_latency_us": 43.52, 00:12:10.065 "max_latency_us": 1846.9236363636364 00:12:10.065 } 00:12:10.065 ], 00:12:10.065 "core_count": 1 00:12:10.065 } 00:12:10.065 [2024-11-27 04:33:57.530750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.065 [2024-11-27 04:33:57.530822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.065 [2024-11-27 04:33:57.530844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61502 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61502 ']' 00:12:10.065 04:33:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61502 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61502 00:12:10.065 killing process with pid 61502 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61502' 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61502 00:12:10.065 [2024-11-27 04:33:57.568520] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.065 04:33:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61502 00:12:10.323 [2024-11-27 04:33:57.691695] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.258 04:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.niyso56lOe 00:12:11.258 04:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:11.258 04:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:11.258 04:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:11.258 04:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:11.258 04:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:11.258 04:33:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:11.258 04:33:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:11.258 ************************************ 00:12:11.258 END TEST raid_read_error_test 00:12:11.258 ************************************ 00:12:11.258 00:12:11.258 real 0m4.610s 00:12:11.258 user 0m5.771s 00:12:11.258 sys 0m0.569s 00:12:11.258 04:33:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.258 04:33:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.258 04:33:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:12:11.258 04:33:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:11.258 04:33:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.258 04:33:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.517 ************************************ 00:12:11.517 START TEST raid_write_error_test 00:12:11.517 ************************************ 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.517 04:33:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:11.517 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tO7rrhxRkJ 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61642 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61642 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:11.518 04:33:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61642 ']' 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.518 04:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.518 [2024-11-27 04:33:58.990586] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:11.518 [2024-11-27 04:33:58.990987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61642 ] 00:12:11.777 [2024-11-27 04:33:59.164341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.777 [2024-11-27 04:33:59.298057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.035 [2024-11-27 04:33:59.503930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.035 [2024-11-27 04:33:59.504010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.602 04:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.602 04:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:12.602 04:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:12:12.602 04:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:12.602 04:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.602 04:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.602 BaseBdev1_malloc 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.602 true 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.602 [2024-11-27 04:34:00.049672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:12.602 [2024-11-27 04:34:00.049954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.602 [2024-11-27 04:34:00.050118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:12.602 [2024-11-27 04:34:00.050243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.602 [2024-11-27 04:34:00.053242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.602 [2024-11-27 04:34:00.053413] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:12.602 BaseBdev1 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.602 BaseBdev2_malloc 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.602 true 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.602 [2024-11-27 04:34:00.119348] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:12.602 [2024-11-27 04:34:00.119538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.602 [2024-11-27 04:34:00.119609] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:12.602 
[2024-11-27 04:34:00.119820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.602 [2024-11-27 04:34:00.122725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.602 [2024-11-27 04:34:00.122790] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:12.602 BaseBdev2 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.602 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.602 [2024-11-27 04:34:00.131436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.602 [2024-11-27 04:34:00.133912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.602 [2024-11-27 04:34:00.134181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:12.602 [2024-11-27 04:34:00.134224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:12.602 [2024-11-27 04:34:00.134552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:12.603 [2024-11-27 04:34:00.134804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:12.603 [2024-11-27 04:34:00.134826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:12.603 [2024-11-27 04:34:00.135047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.603 
04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.603 "name": "raid_bdev1", 00:12:12.603 "uuid": "c649f55f-bd67-4202-8ea4-d988ec32aed3", 00:12:12.603 "strip_size_kb": 64, 00:12:12.603 "state": "online", 00:12:12.603 "raid_level": "raid0", 00:12:12.603 "superblock": true, 
00:12:12.603 "num_base_bdevs": 2, 00:12:12.603 "num_base_bdevs_discovered": 2, 00:12:12.603 "num_base_bdevs_operational": 2, 00:12:12.603 "base_bdevs_list": [ 00:12:12.603 { 00:12:12.603 "name": "BaseBdev1", 00:12:12.603 "uuid": "2b3012d0-ec8a-5ef2-8d22-76a4ffa726c1", 00:12:12.603 "is_configured": true, 00:12:12.603 "data_offset": 2048, 00:12:12.603 "data_size": 63488 00:12:12.603 }, 00:12:12.603 { 00:12:12.603 "name": "BaseBdev2", 00:12:12.603 "uuid": "e51deadd-897b-5496-854d-e2bf6201453c", 00:12:12.603 "is_configured": true, 00:12:12.603 "data_offset": 2048, 00:12:12.603 "data_size": 63488 00:12:12.603 } 00:12:12.603 ] 00:12:12.603 }' 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.603 04:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.170 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:13.170 04:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:13.170 [2024-11-27 04:34:00.769067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.103 04:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.104 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.104 "name": "raid_bdev1", 00:12:14.104 "uuid": "c649f55f-bd67-4202-8ea4-d988ec32aed3", 00:12:14.104 "strip_size_kb": 64, 00:12:14.104 "state": "online", 00:12:14.104 "raid_level": "raid0", 
00:12:14.104 "superblock": true, 00:12:14.104 "num_base_bdevs": 2, 00:12:14.104 "num_base_bdevs_discovered": 2, 00:12:14.104 "num_base_bdevs_operational": 2, 00:12:14.104 "base_bdevs_list": [ 00:12:14.104 { 00:12:14.104 "name": "BaseBdev1", 00:12:14.104 "uuid": "2b3012d0-ec8a-5ef2-8d22-76a4ffa726c1", 00:12:14.104 "is_configured": true, 00:12:14.104 "data_offset": 2048, 00:12:14.104 "data_size": 63488 00:12:14.104 }, 00:12:14.104 { 00:12:14.104 "name": "BaseBdev2", 00:12:14.104 "uuid": "e51deadd-897b-5496-854d-e2bf6201453c", 00:12:14.104 "is_configured": true, 00:12:14.104 "data_offset": 2048, 00:12:14.104 "data_size": 63488 00:12:14.104 } 00:12:14.104 ] 00:12:14.104 }' 00:12:14.104 04:34:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.104 04:34:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.682 [2024-11-27 04:34:02.188540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:14.682 [2024-11-27 04:34:02.188587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.682 [2024-11-27 04:34:02.192273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.682 [2024-11-27 04:34:02.192466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.682 [2024-11-27 04:34:02.192653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.682 [2024-11-27 04:34:02.192829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:14.682 { 
00:12:14.682 "results": [ 00:12:14.682 { 00:12:14.682 "job": "raid_bdev1", 00:12:14.682 "core_mask": "0x1", 00:12:14.682 "workload": "randrw", 00:12:14.682 "percentage": 50, 00:12:14.682 "status": "finished", 00:12:14.682 "queue_depth": 1, 00:12:14.682 "io_size": 131072, 00:12:14.682 "runtime": 1.417224, 00:12:14.682 "iops": 10307.474330098841, 00:12:14.682 "mibps": 1288.4342912623551, 00:12:14.682 "io_failed": 1, 00:12:14.682 "io_timeout": 0, 00:12:14.682 "avg_latency_us": 134.26292683837485, 00:12:14.682 "min_latency_us": 43.985454545454544, 00:12:14.682 "max_latency_us": 1817.1345454545456 00:12:14.682 } 00:12:14.682 ], 00:12:14.682 "core_count": 1 00:12:14.682 } 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61642 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61642 ']' 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61642 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61642 00:12:14.682 killing process with pid 61642 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61642' 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61642 00:12:14.682 04:34:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@978 -- # wait 61642 00:12:14.682 [2024-11-27 04:34:02.228637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.939 [2024-11-27 04:34:02.359297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.871 04:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tO7rrhxRkJ 00:12:15.871 04:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:15.871 04:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:15.871 04:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:15.871 04:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:15.871 04:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.871 04:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:15.871 ************************************ 00:12:15.871 END TEST raid_write_error_test 00:12:15.871 ************************************ 00:12:15.871 04:34:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:15.871 00:12:15.871 real 0m4.598s 00:12:15.871 user 0m5.813s 00:12:15.871 sys 0m0.514s 00:12:15.871 04:34:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.871 04:34:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.130 04:34:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:16.130 04:34:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:12:16.130 04:34:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:16.130 04:34:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.130 04:34:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:12:16.130 ************************************ 00:12:16.130 START TEST raid_state_function_test 00:12:16.130 ************************************ 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.130 Process raid pid: 61791 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61791 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61791' 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61791 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61791 ']' 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.130 04:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.130 [2024-11-27 04:34:03.635830] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:16.130 [2024-11-27 04:34:03.636216] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.388 [2024-11-27 04:34:03.809799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.388 [2024-11-27 04:34:03.944337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.645 [2024-11-27 04:34:04.156259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.645 [2024-11-27 04:34:04.156509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.211 [2024-11-27 04:34:04.671887] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.211 [2024-11-27 04:34:04.671950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.211 [2024-11-27 04:34:04.671968] bdev.c:8674:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.211 [2024-11-27 04:34:04.671985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.211 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.212 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.212 04:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.212 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.212 04:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.212 04:34:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.212 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.212 "name": "Existed_Raid", 00:12:17.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.212 "strip_size_kb": 64, 00:12:17.212 "state": "configuring", 00:12:17.212 "raid_level": "concat", 00:12:17.212 "superblock": false, 00:12:17.212 "num_base_bdevs": 2, 00:12:17.212 "num_base_bdevs_discovered": 0, 00:12:17.212 "num_base_bdevs_operational": 2, 00:12:17.212 "base_bdevs_list": [ 00:12:17.212 { 00:12:17.212 "name": "BaseBdev1", 00:12:17.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.212 "is_configured": false, 00:12:17.212 "data_offset": 0, 00:12:17.212 "data_size": 0 00:12:17.212 }, 00:12:17.212 { 00:12:17.212 "name": "BaseBdev2", 00:12:17.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.212 "is_configured": false, 00:12:17.212 "data_offset": 0, 00:12:17.212 "data_size": 0 00:12:17.212 } 00:12:17.212 ] 00:12:17.212 }' 00:12:17.212 04:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.212 04:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.778 [2024-11-27 04:34:05.175951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.778 [2024-11-27 04:34:05.175995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.778 04:34:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.778 [2024-11-27 04:34:05.183923] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.778 [2024-11-27 04:34:05.183975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.778 [2024-11-27 04:34:05.183990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.778 [2024-11-27 04:34:05.184020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.778 [2024-11-27 04:34:05.229220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.778 BaseBdev1 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.778 04:34:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.778 [ 00:12:17.778 { 00:12:17.778 "name": "BaseBdev1", 00:12:17.778 "aliases": [ 00:12:17.778 "bcadcaa8-1c7d-48dd-a94d-fd7b71f784d3" 00:12:17.778 ], 00:12:17.778 "product_name": "Malloc disk", 00:12:17.778 "block_size": 512, 00:12:17.778 "num_blocks": 65536, 00:12:17.778 "uuid": "bcadcaa8-1c7d-48dd-a94d-fd7b71f784d3", 00:12:17.778 "assigned_rate_limits": { 00:12:17.778 "rw_ios_per_sec": 0, 00:12:17.778 "rw_mbytes_per_sec": 0, 00:12:17.778 "r_mbytes_per_sec": 0, 00:12:17.778 "w_mbytes_per_sec": 0 00:12:17.778 }, 00:12:17.778 "claimed": true, 00:12:17.778 "claim_type": "exclusive_write", 00:12:17.778 "zoned": false, 00:12:17.778 "supported_io_types": { 00:12:17.778 "read": true, 00:12:17.778 "write": true, 00:12:17.778 "unmap": true, 00:12:17.778 "flush": true, 00:12:17.778 "reset": true, 00:12:17.778 "nvme_admin": false, 00:12:17.778 "nvme_io": false, 00:12:17.778 "nvme_io_md": 
false, 00:12:17.778 "write_zeroes": true, 00:12:17.778 "zcopy": true, 00:12:17.778 "get_zone_info": false, 00:12:17.778 "zone_management": false, 00:12:17.778 "zone_append": false, 00:12:17.778 "compare": false, 00:12:17.778 "compare_and_write": false, 00:12:17.778 "abort": true, 00:12:17.778 "seek_hole": false, 00:12:17.778 "seek_data": false, 00:12:17.778 "copy": true, 00:12:17.778 "nvme_iov_md": false 00:12:17.778 }, 00:12:17.778 "memory_domains": [ 00:12:17.778 { 00:12:17.778 "dma_device_id": "system", 00:12:17.778 "dma_device_type": 1 00:12:17.778 }, 00:12:17.778 { 00:12:17.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.778 "dma_device_type": 2 00:12:17.778 } 00:12:17.778 ], 00:12:17.778 "driver_specific": {} 00:12:17.778 } 00:12:17.778 ] 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 
-- # local num_base_bdevs_discovered 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.778 "name": "Existed_Raid", 00:12:17.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.778 "strip_size_kb": 64, 00:12:17.778 "state": "configuring", 00:12:17.778 "raid_level": "concat", 00:12:17.778 "superblock": false, 00:12:17.778 "num_base_bdevs": 2, 00:12:17.778 "num_base_bdevs_discovered": 1, 00:12:17.778 "num_base_bdevs_operational": 2, 00:12:17.778 "base_bdevs_list": [ 00:12:17.778 { 00:12:17.778 "name": "BaseBdev1", 00:12:17.778 "uuid": "bcadcaa8-1c7d-48dd-a94d-fd7b71f784d3", 00:12:17.778 "is_configured": true, 00:12:17.778 "data_offset": 0, 00:12:17.778 "data_size": 65536 00:12:17.778 }, 00:12:17.778 { 00:12:17.778 "name": "BaseBdev2", 00:12:17.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.778 "is_configured": false, 00:12:17.778 "data_offset": 0, 00:12:17.778 "data_size": 0 00:12:17.778 } 00:12:17.778 ] 00:12:17.778 }' 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.778 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.344 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:12:18.344 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.344 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.344 [2024-11-27 04:34:05.773429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:18.344 [2024-11-27 04:34:05.773488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:18.344 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.345 [2024-11-27 04:34:05.785464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.345 [2024-11-27 04:34:05.787961] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:18.345 [2024-11-27 04:34:05.788128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.345 
04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.345 "name": "Existed_Raid", 00:12:18.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.345 "strip_size_kb": 64, 00:12:18.345 "state": "configuring", 00:12:18.345 "raid_level": "concat", 00:12:18.345 "superblock": false, 00:12:18.345 "num_base_bdevs": 2, 00:12:18.345 "num_base_bdevs_discovered": 1, 00:12:18.345 "num_base_bdevs_operational": 2, 00:12:18.345 "base_bdevs_list": [ 00:12:18.345 { 00:12:18.345 "name": 
"BaseBdev1", 00:12:18.345 "uuid": "bcadcaa8-1c7d-48dd-a94d-fd7b71f784d3", 00:12:18.345 "is_configured": true, 00:12:18.345 "data_offset": 0, 00:12:18.345 "data_size": 65536 00:12:18.345 }, 00:12:18.345 { 00:12:18.345 "name": "BaseBdev2", 00:12:18.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.345 "is_configured": false, 00:12:18.345 "data_offset": 0, 00:12:18.345 "data_size": 0 00:12:18.345 } 00:12:18.345 ] 00:12:18.345 }' 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.345 04:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 [2024-11-27 04:34:06.331334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.912 [2024-11-27 04:34:06.331655] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:18.912 [2024-11-27 04:34:06.331685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:18.912 [2024-11-27 04:34:06.332143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:18.912 [2024-11-27 04:34:06.332423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:18.912 [2024-11-27 04:34:06.332450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:18.912 [2024-11-27 04:34:06.332866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.912 BaseBdev2 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 [ 00:12:18.912 { 00:12:18.912 "name": "BaseBdev2", 00:12:18.912 "aliases": [ 00:12:18.912 "ead15727-7c61-485f-a7ec-350918d907e7" 00:12:18.912 ], 00:12:18.912 "product_name": "Malloc disk", 00:12:18.912 "block_size": 512, 00:12:18.912 "num_blocks": 65536, 00:12:18.912 "uuid": "ead15727-7c61-485f-a7ec-350918d907e7", 00:12:18.912 "assigned_rate_limits": { 00:12:18.912 "rw_ios_per_sec": 0, 00:12:18.912 "rw_mbytes_per_sec": 0, 00:12:18.912 "r_mbytes_per_sec": 0, 00:12:18.912 
"w_mbytes_per_sec": 0 00:12:18.912 }, 00:12:18.912 "claimed": true, 00:12:18.912 "claim_type": "exclusive_write", 00:12:18.912 "zoned": false, 00:12:18.912 "supported_io_types": { 00:12:18.912 "read": true, 00:12:18.912 "write": true, 00:12:18.912 "unmap": true, 00:12:18.912 "flush": true, 00:12:18.912 "reset": true, 00:12:18.912 "nvme_admin": false, 00:12:18.912 "nvme_io": false, 00:12:18.912 "nvme_io_md": false, 00:12:18.912 "write_zeroes": true, 00:12:18.912 "zcopy": true, 00:12:18.912 "get_zone_info": false, 00:12:18.912 "zone_management": false, 00:12:18.912 "zone_append": false, 00:12:18.912 "compare": false, 00:12:18.912 "compare_and_write": false, 00:12:18.912 "abort": true, 00:12:18.912 "seek_hole": false, 00:12:18.912 "seek_data": false, 00:12:18.912 "copy": true, 00:12:18.912 "nvme_iov_md": false 00:12:18.912 }, 00:12:18.912 "memory_domains": [ 00:12:18.912 { 00:12:18.912 "dma_device_id": "system", 00:12:18.912 "dma_device_type": 1 00:12:18.912 }, 00:12:18.912 { 00:12:18.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.912 "dma_device_type": 2 00:12:18.912 } 00:12:18.912 ], 00:12:18.912 "driver_specific": {} 00:12:18.912 } 00:12:18.912 ] 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.912 04:34:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.912 "name": "Existed_Raid", 00:12:18.912 "uuid": "17aa3d1d-757d-42cb-a469-548637c9a129", 00:12:18.912 "strip_size_kb": 64, 00:12:18.912 "state": "online", 00:12:18.912 "raid_level": "concat", 00:12:18.912 "superblock": false, 00:12:18.912 "num_base_bdevs": 2, 00:12:18.912 "num_base_bdevs_discovered": 2, 00:12:18.912 "num_base_bdevs_operational": 2, 00:12:18.912 "base_bdevs_list": [ 00:12:18.912 { 00:12:18.912 "name": "BaseBdev1", 00:12:18.912 "uuid": "bcadcaa8-1c7d-48dd-a94d-fd7b71f784d3", 00:12:18.912 "is_configured": true, 00:12:18.912 "data_offset": 0, 
00:12:18.912 "data_size": 65536 00:12:18.912 }, 00:12:18.912 { 00:12:18.912 "name": "BaseBdev2", 00:12:18.912 "uuid": "ead15727-7c61-485f-a7ec-350918d907e7", 00:12:18.912 "is_configured": true, 00:12:18.912 "data_offset": 0, 00:12:18.912 "data_size": 65536 00:12:18.912 } 00:12:18.912 ] 00:12:18.912 }' 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.912 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.479 [2024-11-27 04:34:06.883908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.479 "name": "Existed_Raid", 
00:12:19.479 "aliases": [ 00:12:19.479 "17aa3d1d-757d-42cb-a469-548637c9a129" 00:12:19.479 ], 00:12:19.479 "product_name": "Raid Volume", 00:12:19.479 "block_size": 512, 00:12:19.479 "num_blocks": 131072, 00:12:19.479 "uuid": "17aa3d1d-757d-42cb-a469-548637c9a129", 00:12:19.479 "assigned_rate_limits": { 00:12:19.479 "rw_ios_per_sec": 0, 00:12:19.479 "rw_mbytes_per_sec": 0, 00:12:19.479 "r_mbytes_per_sec": 0, 00:12:19.479 "w_mbytes_per_sec": 0 00:12:19.479 }, 00:12:19.479 "claimed": false, 00:12:19.479 "zoned": false, 00:12:19.479 "supported_io_types": { 00:12:19.479 "read": true, 00:12:19.479 "write": true, 00:12:19.479 "unmap": true, 00:12:19.479 "flush": true, 00:12:19.479 "reset": true, 00:12:19.479 "nvme_admin": false, 00:12:19.479 "nvme_io": false, 00:12:19.479 "nvme_io_md": false, 00:12:19.479 "write_zeroes": true, 00:12:19.479 "zcopy": false, 00:12:19.479 "get_zone_info": false, 00:12:19.479 "zone_management": false, 00:12:19.479 "zone_append": false, 00:12:19.479 "compare": false, 00:12:19.479 "compare_and_write": false, 00:12:19.479 "abort": false, 00:12:19.479 "seek_hole": false, 00:12:19.479 "seek_data": false, 00:12:19.479 "copy": false, 00:12:19.479 "nvme_iov_md": false 00:12:19.479 }, 00:12:19.479 "memory_domains": [ 00:12:19.479 { 00:12:19.479 "dma_device_id": "system", 00:12:19.479 "dma_device_type": 1 00:12:19.479 }, 00:12:19.479 { 00:12:19.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.479 "dma_device_type": 2 00:12:19.479 }, 00:12:19.479 { 00:12:19.479 "dma_device_id": "system", 00:12:19.479 "dma_device_type": 1 00:12:19.479 }, 00:12:19.479 { 00:12:19.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.479 "dma_device_type": 2 00:12:19.479 } 00:12:19.479 ], 00:12:19.479 "driver_specific": { 00:12:19.479 "raid": { 00:12:19.479 "uuid": "17aa3d1d-757d-42cb-a469-548637c9a129", 00:12:19.479 "strip_size_kb": 64, 00:12:19.479 "state": "online", 00:12:19.479 "raid_level": "concat", 00:12:19.479 "superblock": false, 00:12:19.479 
"num_base_bdevs": 2, 00:12:19.479 "num_base_bdevs_discovered": 2, 00:12:19.479 "num_base_bdevs_operational": 2, 00:12:19.479 "base_bdevs_list": [ 00:12:19.479 { 00:12:19.479 "name": "BaseBdev1", 00:12:19.479 "uuid": "bcadcaa8-1c7d-48dd-a94d-fd7b71f784d3", 00:12:19.479 "is_configured": true, 00:12:19.479 "data_offset": 0, 00:12:19.479 "data_size": 65536 00:12:19.479 }, 00:12:19.479 { 00:12:19.479 "name": "BaseBdev2", 00:12:19.479 "uuid": "ead15727-7c61-485f-a7ec-350918d907e7", 00:12:19.479 "is_configured": true, 00:12:19.479 "data_offset": 0, 00:12:19.479 "data_size": 65536 00:12:19.479 } 00:12:19.479 ] 00:12:19.479 } 00:12:19.479 } 00:12:19.479 }' 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:19.479 BaseBdev2' 00:12:19.479 04:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.479 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:19.479 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.479 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:19.479 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.479 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.479 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.479 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.479 04:34:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.479 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.479 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.480 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:19.480 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.480 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.480 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.738 [2024-11-27 04:34:07.143621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:19.738 [2024-11-27 04:34:07.143666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.738 [2024-11-27 04:34:07.143734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@260 -- # local expected_state 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.738 04:34:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.738 "name": "Existed_Raid", 00:12:19.738 "uuid": "17aa3d1d-757d-42cb-a469-548637c9a129", 00:12:19.738 "strip_size_kb": 64, 00:12:19.738 "state": "offline", 00:12:19.738 "raid_level": "concat", 00:12:19.738 "superblock": false, 00:12:19.738 "num_base_bdevs": 2, 00:12:19.738 "num_base_bdevs_discovered": 1, 00:12:19.738 "num_base_bdevs_operational": 1, 00:12:19.738 "base_bdevs_list": [ 00:12:19.738 { 00:12:19.738 "name": null, 00:12:19.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.738 "is_configured": false, 00:12:19.738 "data_offset": 0, 00:12:19.738 "data_size": 65536 00:12:19.738 }, 00:12:19.738 { 00:12:19.738 "name": "BaseBdev2", 00:12:19.738 "uuid": "ead15727-7c61-485f-a7ec-350918d907e7", 00:12:19.738 "is_configured": true, 00:12:19.738 "data_offset": 0, 00:12:19.738 "data_size": 65536 00:12:19.738 } 00:12:19.738 ] 00:12:19.738 }' 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.738 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r 
'.[0]["name"]' 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.305 [2024-11-27 04:34:07.776481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:20.305 [2024-11-27 04:34:07.776550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61791 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61791 ']' 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61791 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.305 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61791 00:12:20.562 killing process with pid 61791 00:12:20.562 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.562 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.562 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61791' 00:12:20.562 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61791 00:12:20.562 [2024-11-27 04:34:07.952271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.562 04:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61791 00:12:20.562 [2024-11-27 04:34:07.966921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:21.498 00:12:21.498 real 0m5.501s 00:12:21.498 user 0m8.301s 00:12:21.498 sys 0m0.746s 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.498 04:34:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.498 ************************************ 00:12:21.498 END TEST raid_state_function_test 00:12:21.498 ************************************ 00:12:21.498 04:34:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:12:21.498 04:34:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:21.498 04:34:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.498 04:34:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.498 ************************************ 00:12:21.498 START TEST raid_state_function_test_sb 00:12:21.498 ************************************ 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:21.498 Process raid pid: 62044 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62044 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62044' 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
waitforlisten 62044 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62044 ']' 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.498 04:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 [2024-11-27 04:34:09.185161] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:12:21.764 [2024-11-27 04:34:09.185319] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.764 [2024-11-27 04:34:09.368551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.033 [2024-11-27 04:34:09.552597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.292 [2024-11-27 04:34:09.773342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.292 [2024-11-27 04:34:09.773386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.860 [2024-11-27 04:34:10.197467] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.860 [2024-11-27 04:34:10.197535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.860 [2024-11-27 04:34:10.197552] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.860 [2024-11-27 04:34:10.197569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.860 "name": "Existed_Raid", 00:12:22.860 "uuid": "d6aced67-ef13-40a7-8b8f-753a4e7b30d9", 00:12:22.860 
"strip_size_kb": 64, 00:12:22.860 "state": "configuring", 00:12:22.860 "raid_level": "concat", 00:12:22.860 "superblock": true, 00:12:22.860 "num_base_bdevs": 2, 00:12:22.860 "num_base_bdevs_discovered": 0, 00:12:22.860 "num_base_bdevs_operational": 2, 00:12:22.860 "base_bdevs_list": [ 00:12:22.860 { 00:12:22.860 "name": "BaseBdev1", 00:12:22.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.860 "is_configured": false, 00:12:22.860 "data_offset": 0, 00:12:22.860 "data_size": 0 00:12:22.860 }, 00:12:22.860 { 00:12:22.860 "name": "BaseBdev2", 00:12:22.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.860 "is_configured": false, 00:12:22.860 "data_offset": 0, 00:12:22.860 "data_size": 0 00:12:22.860 } 00:12:22.860 ] 00:12:22.860 }' 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.860 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.118 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.118 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.118 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.118 [2024-11-27 04:34:10.733524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.118 [2024-11-27 04:34:10.733572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:23.118 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.118 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:23.118 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:23.118 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.377 [2024-11-27 04:34:10.741525] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:23.377 [2024-11-27 04:34:10.741581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:23.377 [2024-11-27 04:34:10.741597] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.377 [2024-11-27 04:34:10.741616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.377 [2024-11-27 04:34:10.786304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.377 BaseBdev1 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.377 [ 00:12:23.377 { 00:12:23.377 "name": "BaseBdev1", 00:12:23.377 "aliases": [ 00:12:23.377 "b2391541-852b-4887-82eb-59d824ba6a1b" 00:12:23.377 ], 00:12:23.377 "product_name": "Malloc disk", 00:12:23.377 "block_size": 512, 00:12:23.377 "num_blocks": 65536, 00:12:23.377 "uuid": "b2391541-852b-4887-82eb-59d824ba6a1b", 00:12:23.377 "assigned_rate_limits": { 00:12:23.377 "rw_ios_per_sec": 0, 00:12:23.377 "rw_mbytes_per_sec": 0, 00:12:23.377 "r_mbytes_per_sec": 0, 00:12:23.377 "w_mbytes_per_sec": 0 00:12:23.377 }, 00:12:23.377 "claimed": true, 00:12:23.377 "claim_type": "exclusive_write", 00:12:23.377 "zoned": false, 00:12:23.377 "supported_io_types": { 00:12:23.377 "read": true, 00:12:23.377 "write": true, 00:12:23.377 "unmap": true, 00:12:23.377 "flush": true, 00:12:23.377 "reset": true, 00:12:23.377 "nvme_admin": false, 00:12:23.377 "nvme_io": false, 00:12:23.377 "nvme_io_md": false, 00:12:23.377 "write_zeroes": true, 00:12:23.377 "zcopy": true, 00:12:23.377 "get_zone_info": false, 00:12:23.377 "zone_management": false, 00:12:23.377 "zone_append": false, 00:12:23.377 "compare": false, 00:12:23.377 
"compare_and_write": false, 00:12:23.377 "abort": true, 00:12:23.377 "seek_hole": false, 00:12:23.377 "seek_data": false, 00:12:23.377 "copy": true, 00:12:23.377 "nvme_iov_md": false 00:12:23.377 }, 00:12:23.377 "memory_domains": [ 00:12:23.377 { 00:12:23.377 "dma_device_id": "system", 00:12:23.377 "dma_device_type": 1 00:12:23.377 }, 00:12:23.377 { 00:12:23.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.377 "dma_device_type": 2 00:12:23.377 } 00:12:23.377 ], 00:12:23.377 "driver_specific": {} 00:12:23.377 } 00:12:23.377 ] 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.377 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.378 04:34:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.378 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.378 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.378 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.378 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.378 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.378 "name": "Existed_Raid", 00:12:23.378 "uuid": "1fa82e28-2833-4c19-a213-87b9522e0b33", 00:12:23.378 "strip_size_kb": 64, 00:12:23.378 "state": "configuring", 00:12:23.378 "raid_level": "concat", 00:12:23.378 "superblock": true, 00:12:23.378 "num_base_bdevs": 2, 00:12:23.378 "num_base_bdevs_discovered": 1, 00:12:23.378 "num_base_bdevs_operational": 2, 00:12:23.378 "base_bdevs_list": [ 00:12:23.378 { 00:12:23.378 "name": "BaseBdev1", 00:12:23.378 "uuid": "b2391541-852b-4887-82eb-59d824ba6a1b", 00:12:23.378 "is_configured": true, 00:12:23.378 "data_offset": 2048, 00:12:23.378 "data_size": 63488 00:12:23.378 }, 00:12:23.378 { 00:12:23.378 "name": "BaseBdev2", 00:12:23.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.378 "is_configured": false, 00:12:23.378 "data_offset": 0, 00:12:23.378 "data_size": 0 00:12:23.378 } 00:12:23.378 ] 00:12:23.378 }' 00:12:23.378 04:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.378 04:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.943 [2024-11-27 04:34:11.282492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.943 [2024-11-27 04:34:11.282693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.943 [2024-11-27 04:34:11.290566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.943 [2024-11-27 04:34:11.293218] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.943 [2024-11-27 04:34:11.293401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.943 "name": "Existed_Raid", 00:12:23.943 "uuid": "caa2de88-1886-4efe-ad06-49a5cb0cb93d", 00:12:23.943 "strip_size_kb": 64, 00:12:23.943 "state": "configuring", 00:12:23.943 "raid_level": "concat", 00:12:23.943 "superblock": true, 00:12:23.943 "num_base_bdevs": 2, 00:12:23.943 "num_base_bdevs_discovered": 1, 00:12:23.943 "num_base_bdevs_operational": 2, 00:12:23.943 "base_bdevs_list": [ 00:12:23.943 { 00:12:23.943 "name": "BaseBdev1", 00:12:23.943 "uuid": 
"b2391541-852b-4887-82eb-59d824ba6a1b", 00:12:23.943 "is_configured": true, 00:12:23.943 "data_offset": 2048, 00:12:23.943 "data_size": 63488 00:12:23.943 }, 00:12:23.943 { 00:12:23.943 "name": "BaseBdev2", 00:12:23.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.943 "is_configured": false, 00:12:23.943 "data_offset": 0, 00:12:23.943 "data_size": 0 00:12:23.943 } 00:12:23.943 ] 00:12:23.943 }' 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.943 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.201 [2024-11-27 04:34:11.812987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.201 [2024-11-27 04:34:11.813314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:24.201 [2024-11-27 04:34:11.813335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:24.201 BaseBdev2 00:12:24.201 [2024-11-27 04:34:11.813746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:24.201 [2024-11-27 04:34:11.813974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:24.201 [2024-11-27 04:34:11.814007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:24.201 [2024-11-27 04:34:11.814188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.201 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.202 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.202 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.460 [ 00:12:24.460 { 00:12:24.460 "name": "BaseBdev2", 00:12:24.460 "aliases": [ 00:12:24.460 "a39d8a08-b530-474d-95d0-eb72748badc1" 00:12:24.460 ], 00:12:24.460 "product_name": "Malloc disk", 00:12:24.460 "block_size": 512, 00:12:24.460 "num_blocks": 65536, 00:12:24.460 "uuid": "a39d8a08-b530-474d-95d0-eb72748badc1", 00:12:24.460 "assigned_rate_limits": { 00:12:24.460 "rw_ios_per_sec": 0, 00:12:24.460 "rw_mbytes_per_sec": 0, 00:12:24.460 "r_mbytes_per_sec": 0, 
00:12:24.460 "w_mbytes_per_sec": 0 00:12:24.460 }, 00:12:24.460 "claimed": true, 00:12:24.460 "claim_type": "exclusive_write", 00:12:24.460 "zoned": false, 00:12:24.460 "supported_io_types": { 00:12:24.460 "read": true, 00:12:24.460 "write": true, 00:12:24.460 "unmap": true, 00:12:24.460 "flush": true, 00:12:24.460 "reset": true, 00:12:24.460 "nvme_admin": false, 00:12:24.460 "nvme_io": false, 00:12:24.460 "nvme_io_md": false, 00:12:24.460 "write_zeroes": true, 00:12:24.460 "zcopy": true, 00:12:24.460 "get_zone_info": false, 00:12:24.460 "zone_management": false, 00:12:24.460 "zone_append": false, 00:12:24.460 "compare": false, 00:12:24.460 "compare_and_write": false, 00:12:24.460 "abort": true, 00:12:24.460 "seek_hole": false, 00:12:24.460 "seek_data": false, 00:12:24.460 "copy": true, 00:12:24.460 "nvme_iov_md": false 00:12:24.460 }, 00:12:24.460 "memory_domains": [ 00:12:24.460 { 00:12:24.460 "dma_device_id": "system", 00:12:24.460 "dma_device_type": 1 00:12:24.460 }, 00:12:24.460 { 00:12:24.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.460 "dma_device_type": 2 00:12:24.460 } 00:12:24.460 ], 00:12:24.460 "driver_specific": {} 00:12:24.460 } 00:12:24.460 ] 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.460 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.460 "name": "Existed_Raid", 00:12:24.460 "uuid": "caa2de88-1886-4efe-ad06-49a5cb0cb93d", 00:12:24.461 "strip_size_kb": 64, 00:12:24.461 "state": "online", 00:12:24.461 "raid_level": "concat", 00:12:24.461 "superblock": true, 00:12:24.461 "num_base_bdevs": 2, 00:12:24.461 "num_base_bdevs_discovered": 2, 00:12:24.461 "num_base_bdevs_operational": 2, 00:12:24.461 "base_bdevs_list": [ 00:12:24.461 { 00:12:24.461 "name": "BaseBdev1", 00:12:24.461 "uuid": 
"b2391541-852b-4887-82eb-59d824ba6a1b", 00:12:24.461 "is_configured": true, 00:12:24.461 "data_offset": 2048, 00:12:24.461 "data_size": 63488 00:12:24.461 }, 00:12:24.461 { 00:12:24.461 "name": "BaseBdev2", 00:12:24.461 "uuid": "a39d8a08-b530-474d-95d0-eb72748badc1", 00:12:24.461 "is_configured": true, 00:12:24.461 "data_offset": 2048, 00:12:24.461 "data_size": 63488 00:12:24.461 } 00:12:24.461 ] 00:12:24.461 }' 00:12:24.461 04:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.461 04:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.026 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:25.026 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:25.026 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.026 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.026 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.026 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.027 [2024-11-27 04:34:12.357525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.027 "name": "Existed_Raid", 00:12:25.027 "aliases": [ 00:12:25.027 "caa2de88-1886-4efe-ad06-49a5cb0cb93d" 00:12:25.027 ], 00:12:25.027 "product_name": "Raid Volume", 00:12:25.027 "block_size": 512, 00:12:25.027 "num_blocks": 126976, 00:12:25.027 "uuid": "caa2de88-1886-4efe-ad06-49a5cb0cb93d", 00:12:25.027 "assigned_rate_limits": { 00:12:25.027 "rw_ios_per_sec": 0, 00:12:25.027 "rw_mbytes_per_sec": 0, 00:12:25.027 "r_mbytes_per_sec": 0, 00:12:25.027 "w_mbytes_per_sec": 0 00:12:25.027 }, 00:12:25.027 "claimed": false, 00:12:25.027 "zoned": false, 00:12:25.027 "supported_io_types": { 00:12:25.027 "read": true, 00:12:25.027 "write": true, 00:12:25.027 "unmap": true, 00:12:25.027 "flush": true, 00:12:25.027 "reset": true, 00:12:25.027 "nvme_admin": false, 00:12:25.027 "nvme_io": false, 00:12:25.027 "nvme_io_md": false, 00:12:25.027 "write_zeroes": true, 00:12:25.027 "zcopy": false, 00:12:25.027 "get_zone_info": false, 00:12:25.027 "zone_management": false, 00:12:25.027 "zone_append": false, 00:12:25.027 "compare": false, 00:12:25.027 "compare_and_write": false, 00:12:25.027 "abort": false, 00:12:25.027 "seek_hole": false, 00:12:25.027 "seek_data": false, 00:12:25.027 "copy": false, 00:12:25.027 "nvme_iov_md": false 00:12:25.027 }, 00:12:25.027 "memory_domains": [ 00:12:25.027 { 00:12:25.027 "dma_device_id": "system", 00:12:25.027 "dma_device_type": 1 00:12:25.027 }, 00:12:25.027 { 00:12:25.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.027 "dma_device_type": 2 00:12:25.027 }, 00:12:25.027 { 00:12:25.027 "dma_device_id": "system", 00:12:25.027 "dma_device_type": 1 00:12:25.027 }, 00:12:25.027 { 00:12:25.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.027 "dma_device_type": 2 00:12:25.027 } 00:12:25.027 ], 00:12:25.027 "driver_specific": { 00:12:25.027 "raid": { 00:12:25.027 "uuid": "caa2de88-1886-4efe-ad06-49a5cb0cb93d", 00:12:25.027 
"strip_size_kb": 64, 00:12:25.027 "state": "online", 00:12:25.027 "raid_level": "concat", 00:12:25.027 "superblock": true, 00:12:25.027 "num_base_bdevs": 2, 00:12:25.027 "num_base_bdevs_discovered": 2, 00:12:25.027 "num_base_bdevs_operational": 2, 00:12:25.027 "base_bdevs_list": [ 00:12:25.027 { 00:12:25.027 "name": "BaseBdev1", 00:12:25.027 "uuid": "b2391541-852b-4887-82eb-59d824ba6a1b", 00:12:25.027 "is_configured": true, 00:12:25.027 "data_offset": 2048, 00:12:25.027 "data_size": 63488 00:12:25.027 }, 00:12:25.027 { 00:12:25.027 "name": "BaseBdev2", 00:12:25.027 "uuid": "a39d8a08-b530-474d-95d0-eb72748badc1", 00:12:25.027 "is_configured": true, 00:12:25.027 "data_offset": 2048, 00:12:25.027 "data_size": 63488 00:12:25.027 } 00:12:25.027 ] 00:12:25.027 } 00:12:25.027 } 00:12:25.027 }' 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:25.027 BaseBdev2' 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.027 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.027 [2024-11-27 04:34:12.633333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.027 [2024-11-27 04:34:12.633377] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.027 [2024-11-27 04:34:12.633443] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.285 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:12:25.286 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.286 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.286 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.286 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.286 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.286 "name": "Existed_Raid", 00:12:25.286 "uuid": "caa2de88-1886-4efe-ad06-49a5cb0cb93d", 00:12:25.286 "strip_size_kb": 64, 00:12:25.286 "state": "offline", 00:12:25.286 "raid_level": "concat", 00:12:25.286 "superblock": true, 00:12:25.286 "num_base_bdevs": 2, 00:12:25.286 "num_base_bdevs_discovered": 1, 00:12:25.286 "num_base_bdevs_operational": 1, 00:12:25.286 "base_bdevs_list": [ 00:12:25.286 { 00:12:25.286 "name": null, 00:12:25.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.286 "is_configured": false, 00:12:25.286 "data_offset": 0, 00:12:25.286 "data_size": 63488 00:12:25.286 }, 00:12:25.286 { 00:12:25.286 "name": "BaseBdev2", 00:12:25.286 "uuid": "a39d8a08-b530-474d-95d0-eb72748badc1", 00:12:25.286 "is_configured": true, 00:12:25.286 "data_offset": 2048, 00:12:25.286 "data_size": 63488 00:12:25.286 } 00:12:25.286 ] 00:12:25.286 }' 00:12:25.286 04:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.286 04:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.852 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.853 [2024-11-27 04:34:13.276000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:25.853 [2024-11-27 04:34:13.276073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.853 04:34:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62044 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62044 ']' 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62044 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62044 00:12:25.853 killing process with pid 62044 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62044' 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62044 00:12:25.853 [2024-11-27 04:34:13.441412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.853 04:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62044 00:12:25.853 [2024-11-27 04:34:13.456057] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.289 04:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:27.289 00:12:27.289 real 0m5.519s 00:12:27.289 user 0m8.308s 00:12:27.289 sys 0m0.706s 00:12:27.289 04:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.289 ************************************ 00:12:27.289 END TEST raid_state_function_test_sb 00:12:27.289 04:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.289 ************************************ 00:12:27.289 04:34:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:12:27.289 04:34:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:27.290 04:34:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.290 04:34:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.290 ************************************ 00:12:27.290 START TEST raid_superblock_test 00:12:27.290 ************************************ 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:27.290 
04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62302 00:12:27.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62302 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62302 ']' 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
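`waitforlisten 62302` above blocks until the freshly spawned `bdev_svc` process accepts connections on `/var/tmp/spdk.sock`. A hedged sketch of that poll loop in Python (the function name and interval are illustrative assumptions, not the actual `autotest_common.sh` implementation; the demo listener stands in for the SPDK app):

```python
import os
import socket
import tempfile
import time

def wait_for_listen(sock_path, timeout=5.0, interval=0.1):
    # Repeatedly try to connect to the UNIX-domain socket until the
    # server process is accepting connections or the timeout expires.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False

# Demo: bind a listener in-process so the wait succeeds immediately.
path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(1)
ok = wait_for_listen(path)
print(ok)
```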
00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.290 04:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.290 [2024-11-27 04:34:14.786459] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:27.290 [2024-11-27 04:34:14.786645] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62302 ] 00:12:27.546 [2024-11-27 04:34:14.993232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.546 [2024-11-27 04:34:15.148186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.803 [2024-11-27 04:34:15.358269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.803 [2024-11-27 04:34:15.358720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:28.367 04:34:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.367 malloc1 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.367 [2024-11-27 04:34:15.748244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:28.367 [2024-11-27 04:34:15.748322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.367 [2024-11-27 04:34:15.748357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:28.367 [2024-11-27 04:34:15.748374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.367 [2024-11-27 04:34:15.751276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.367 [2024-11-27 04:34:15.751325] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:28.367 pt1 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:28.367 04:34:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.367 malloc2 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.367 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.367 [2024-11-27 04:34:15.800046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.368 [2024-11-27 04:34:15.800119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.368 [2024-11-27 04:34:15.800157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:28.368 
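The loop at `bdev_raid.sh@416`-`@423` above builds three parallel bash arrays (`base_bdevs_malloc`, `base_bdevs_pt`, `base_bdevs_pt_uuid`) before creating each malloc/passthru pair. A small sketch of the naming scheme those arrays follow for `num_base_bdevs=2` (a reconstruction from the log, not the script itself):

```python
# Mirror of the bash arrays: malloc1/pt1/...0001, malloc2/pt2/...0002.
num_base_bdevs = 2
base_bdevs_malloc, base_bdevs_pt, base_bdevs_pt_uuid = [], [], []
for i in range(1, num_base_bdevs + 1):
    base_bdevs_malloc.append(f"malloc{i}")
    base_bdevs_pt.append(f"pt{i}")
    # Fixed, predictable UUIDs: a zero prefix with the index in the
    # final 12-digit group, matching the log's 00000000-...-000000000001.
    base_bdevs_pt_uuid.append(f"00000000-0000-0000-0000-{i:012d}")
print(base_bdevs_pt, base_bdevs_pt_uuid)
```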
[2024-11-27 04:34:15.800171] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.368 [2024-11-27 04:34:15.803030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.368 [2024-11-27 04:34:15.803076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.368 pt2 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.368 [2024-11-27 04:34:15.808121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:28.368 [2024-11-27 04:34:15.810533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.368 [2024-11-27 04:34:15.810745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:28.368 [2024-11-27 04:34:15.810765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:28.368 [2024-11-27 04:34:15.811152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:28.368 [2024-11-27 04:34:15.811361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:28.368 [2024-11-27 04:34:15.811382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:28.368 [2024-11-27 04:34:15.811586] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.368 "name": "raid_bdev1", 00:12:28.368 "uuid": 
"ebe7ead3-0bfb-4c06-9418-c2496e0206bc", 00:12:28.368 "strip_size_kb": 64, 00:12:28.368 "state": "online", 00:12:28.368 "raid_level": "concat", 00:12:28.368 "superblock": true, 00:12:28.368 "num_base_bdevs": 2, 00:12:28.368 "num_base_bdevs_discovered": 2, 00:12:28.368 "num_base_bdevs_operational": 2, 00:12:28.368 "base_bdevs_list": [ 00:12:28.368 { 00:12:28.368 "name": "pt1", 00:12:28.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.368 "is_configured": true, 00:12:28.368 "data_offset": 2048, 00:12:28.368 "data_size": 63488 00:12:28.368 }, 00:12:28.368 { 00:12:28.368 "name": "pt2", 00:12:28.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.368 "is_configured": true, 00:12:28.368 "data_offset": 2048, 00:12:28.368 "data_size": 63488 00:12:28.368 } 00:12:28.368 ] 00:12:28.368 }' 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.368 04:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.931 
04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.931 [2024-11-27 04:34:16.360552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.931 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.931 "name": "raid_bdev1", 00:12:28.931 "aliases": [ 00:12:28.931 "ebe7ead3-0bfb-4c06-9418-c2496e0206bc" 00:12:28.931 ], 00:12:28.931 "product_name": "Raid Volume", 00:12:28.931 "block_size": 512, 00:12:28.931 "num_blocks": 126976, 00:12:28.931 "uuid": "ebe7ead3-0bfb-4c06-9418-c2496e0206bc", 00:12:28.931 "assigned_rate_limits": { 00:12:28.931 "rw_ios_per_sec": 0, 00:12:28.931 "rw_mbytes_per_sec": 0, 00:12:28.931 "r_mbytes_per_sec": 0, 00:12:28.931 "w_mbytes_per_sec": 0 00:12:28.931 }, 00:12:28.931 "claimed": false, 00:12:28.931 "zoned": false, 00:12:28.931 "supported_io_types": { 00:12:28.931 "read": true, 00:12:28.931 "write": true, 00:12:28.931 "unmap": true, 00:12:28.931 "flush": true, 00:12:28.931 "reset": true, 00:12:28.931 "nvme_admin": false, 00:12:28.931 "nvme_io": false, 00:12:28.931 "nvme_io_md": false, 00:12:28.931 "write_zeroes": true, 00:12:28.931 "zcopy": false, 00:12:28.931 "get_zone_info": false, 00:12:28.931 "zone_management": false, 00:12:28.931 "zone_append": false, 00:12:28.931 "compare": false, 00:12:28.931 "compare_and_write": false, 00:12:28.931 "abort": false, 00:12:28.931 "seek_hole": false, 00:12:28.931 "seek_data": false, 00:12:28.931 "copy": false, 00:12:28.931 "nvme_iov_md": false 00:12:28.931 }, 00:12:28.931 "memory_domains": [ 00:12:28.931 { 00:12:28.931 "dma_device_id": "system", 00:12:28.931 "dma_device_type": 1 00:12:28.931 }, 00:12:28.931 { 00:12:28.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.931 "dma_device_type": 2 00:12:28.931 }, 00:12:28.931 { 00:12:28.931 "dma_device_id": "system", 00:12:28.931 
"dma_device_type": 1 00:12:28.931 }, 00:12:28.931 { 00:12:28.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.931 "dma_device_type": 2 00:12:28.931 } 00:12:28.931 ], 00:12:28.931 "driver_specific": { 00:12:28.931 "raid": { 00:12:28.931 "uuid": "ebe7ead3-0bfb-4c06-9418-c2496e0206bc", 00:12:28.931 "strip_size_kb": 64, 00:12:28.931 "state": "online", 00:12:28.931 "raid_level": "concat", 00:12:28.931 "superblock": true, 00:12:28.931 "num_base_bdevs": 2, 00:12:28.931 "num_base_bdevs_discovered": 2, 00:12:28.931 "num_base_bdevs_operational": 2, 00:12:28.931 "base_bdevs_list": [ 00:12:28.931 { 00:12:28.931 "name": "pt1", 00:12:28.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.932 "is_configured": true, 00:12:28.932 "data_offset": 2048, 00:12:28.932 "data_size": 63488 00:12:28.932 }, 00:12:28.932 { 00:12:28.932 "name": "pt2", 00:12:28.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.932 "is_configured": true, 00:12:28.932 "data_offset": 2048, 00:12:28.932 "data_size": 63488 00:12:28.932 } 00:12:28.932 ] 00:12:28.932 } 00:12:28.932 } 00:12:28.932 }' 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:28.932 pt2' 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.932 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.189 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 [2024-11-27 04:34:16.608609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
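The `verify_raid_bdev_properties` comparisons above hinge on jq's `join(" ")` rendering absent/`null` fields as empty strings, which is why `cmp_raid_bdev` and `cmp_base_bdev` both come out as `'512 '` with trailing spaces and the bash test pattern is `\5\1\2\ \ \ `. A sketch of that join semantics in Python (the input JSON is abbreviated from the dump above):

```python
import json

# Abbreviated bdev description: md_size/md_interleave/dif_type absent.
bdev = json.loads('{"block_size": 512, "num_blocks": 126976}')
fields = [bdev.get(k) for k in ("block_size", "md_size", "md_interleave", "dif_type")]

# jq's join(" ") turns null into "", so missing metadata fields leave
# three trailing spaces after the block size.
joined = " ".join("" if v is None else str(v) for v in fields)
print(repr(joined))
```

Comparing these joined strings is a compact way to assert that the RAID volume inherits block size and metadata layout from every base bdev.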
00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ebe7ead3-0bfb-4c06-9418-c2496e0206bc 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ebe7ead3-0bfb-4c06-9418-c2496e0206bc ']' 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 [2024-11-27 04:34:16.660251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.190 [2024-11-27 04:34:16.660398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.190 [2024-11-27 04:34:16.660548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.190 [2024-11-27 04:34:16.660619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.190 [2024-11-27 04:34:16.660639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.190 
04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 [2024-11-27 04:34:16.800332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:29.190 [2024-11-27 04:34:16.802789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:29.190 [2024-11-27 04:34:16.802888] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:29.190 [2024-11-27 04:34:16.802968] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:29.190 [2024-11-27 04:34:16.802995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.190 [2024-11-27 04:34:16.803011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:29.190 request: 00:12:29.190 { 00:12:29.190 "name": "raid_bdev1", 00:12:29.190 "raid_level": "concat", 00:12:29.190 "base_bdevs": [ 00:12:29.190 "malloc1", 00:12:29.190 "malloc2" 00:12:29.190 ], 00:12:29.190 "strip_size_kb": 64, 00:12:29.190 "superblock": false, 00:12:29.190 "method": "bdev_raid_create", 00:12:29.190 "req_id": 1 00:12:29.190 } 00:12:29.190 Got JSON-RPC error response 00:12:29.190 response: 00:12:29.190 { 00:12:29.190 "code": -17, 00:12:29.190 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:29.190 } 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:29.190 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.448 [2024-11-27 04:34:16.868329] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:29.448 [2024-11-27 04:34:16.868551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.448 [2024-11-27 04:34:16.868623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:29.448 [2024-11-27 04:34:16.868732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.448 [2024-11-27 04:34:16.871679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.448 [2024-11-27 04:34:16.871846] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:29.448 [2024-11-27 04:34:16.871981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:29.448 [2024-11-27 04:34:16.872058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:29.448 pt1 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.448 "name": "raid_bdev1", 00:12:29.448 "uuid": "ebe7ead3-0bfb-4c06-9418-c2496e0206bc", 00:12:29.448 "strip_size_kb": 64, 00:12:29.448 "state": "configuring", 00:12:29.448 "raid_level": "concat", 00:12:29.448 "superblock": true, 00:12:29.448 "num_base_bdevs": 2, 00:12:29.448 "num_base_bdevs_discovered": 1, 00:12:29.448 "num_base_bdevs_operational": 2, 00:12:29.448 "base_bdevs_list": [ 00:12:29.448 { 00:12:29.448 "name": "pt1", 00:12:29.448 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:29.448 "is_configured": true, 00:12:29.448 "data_offset": 2048, 00:12:29.448 "data_size": 63488 00:12:29.448 }, 00:12:29.448 { 00:12:29.448 "name": null, 00:12:29.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.448 "is_configured": false, 00:12:29.448 "data_offset": 2048, 00:12:29.448 "data_size": 63488 00:12:29.448 } 00:12:29.448 ] 00:12:29.448 }' 00:12:29.448 04:34:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.448 04:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.014 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:30.014 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:30.014 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:30.014 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:30.014 04:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.014 04:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.014 [2024-11-27 04:34:17.388472] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:30.014 [2024-11-27 04:34:17.388565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.014 [2024-11-27 04:34:17.388599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:30.014 [2024-11-27 04:34:17.388616] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.014 [2024-11-27 04:34:17.389205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.014 [2024-11-27 04:34:17.389249] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:30.014 [2024-11-27 04:34:17.389353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:30.014 [2024-11-27 04:34:17.389395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.014 [2024-11-27 04:34:17.389537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:30.014 [2024-11-27 04:34:17.389558] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:30.014 [2024-11-27 04:34:17.389906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:30.014 [2024-11-27 04:34:17.390087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:30.014 [2024-11-27 04:34:17.390101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:30.014 [2024-11-27 04:34:17.390268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.014 pt2 00:12:30.014 04:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.014 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:30.014 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:30.014 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.015 "name": "raid_bdev1", 00:12:30.015 "uuid": "ebe7ead3-0bfb-4c06-9418-c2496e0206bc", 00:12:30.015 "strip_size_kb": 64, 00:12:30.015 "state": "online", 00:12:30.015 "raid_level": "concat", 00:12:30.015 "superblock": true, 00:12:30.015 "num_base_bdevs": 2, 00:12:30.015 "num_base_bdevs_discovered": 2, 00:12:30.015 "num_base_bdevs_operational": 2, 00:12:30.015 "base_bdevs_list": [ 00:12:30.015 { 00:12:30.015 "name": "pt1", 00:12:30.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:30.015 "is_configured": true, 00:12:30.015 "data_offset": 2048, 00:12:30.015 "data_size": 63488 00:12:30.015 }, 00:12:30.015 { 00:12:30.015 "name": "pt2", 00:12:30.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.015 "is_configured": true, 00:12:30.015 "data_offset": 2048, 00:12:30.015 "data_size": 63488 00:12:30.015 } 00:12:30.015 ] 00:12:30.015 }' 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.015 04:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.274 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:30.274 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:30.274 
04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:30.274 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:30.274 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:30.274 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:30.274 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.274 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:30.274 04:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.274 04:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.274 [2024-11-27 04:34:17.876909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.274 04:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.533 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:30.533 "name": "raid_bdev1", 00:12:30.533 "aliases": [ 00:12:30.533 "ebe7ead3-0bfb-4c06-9418-c2496e0206bc" 00:12:30.533 ], 00:12:30.533 "product_name": "Raid Volume", 00:12:30.533 "block_size": 512, 00:12:30.533 "num_blocks": 126976, 00:12:30.533 "uuid": "ebe7ead3-0bfb-4c06-9418-c2496e0206bc", 00:12:30.533 "assigned_rate_limits": { 00:12:30.533 "rw_ios_per_sec": 0, 00:12:30.533 "rw_mbytes_per_sec": 0, 00:12:30.533 "r_mbytes_per_sec": 0, 00:12:30.533 "w_mbytes_per_sec": 0 00:12:30.533 }, 00:12:30.533 "claimed": false, 00:12:30.533 "zoned": false, 00:12:30.533 "supported_io_types": { 00:12:30.533 "read": true, 00:12:30.533 "write": true, 00:12:30.533 "unmap": true, 00:12:30.533 "flush": true, 00:12:30.533 "reset": true, 00:12:30.533 "nvme_admin": false, 00:12:30.533 "nvme_io": false, 00:12:30.533 "nvme_io_md": false, 00:12:30.533 
"write_zeroes": true, 00:12:30.533 "zcopy": false, 00:12:30.533 "get_zone_info": false, 00:12:30.533 "zone_management": false, 00:12:30.533 "zone_append": false, 00:12:30.533 "compare": false, 00:12:30.533 "compare_and_write": false, 00:12:30.533 "abort": false, 00:12:30.533 "seek_hole": false, 00:12:30.533 "seek_data": false, 00:12:30.533 "copy": false, 00:12:30.533 "nvme_iov_md": false 00:12:30.533 }, 00:12:30.533 "memory_domains": [ 00:12:30.533 { 00:12:30.533 "dma_device_id": "system", 00:12:30.533 "dma_device_type": 1 00:12:30.533 }, 00:12:30.533 { 00:12:30.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.533 "dma_device_type": 2 00:12:30.533 }, 00:12:30.533 { 00:12:30.533 "dma_device_id": "system", 00:12:30.533 "dma_device_type": 1 00:12:30.533 }, 00:12:30.533 { 00:12:30.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.533 "dma_device_type": 2 00:12:30.533 } 00:12:30.533 ], 00:12:30.533 "driver_specific": { 00:12:30.533 "raid": { 00:12:30.533 "uuid": "ebe7ead3-0bfb-4c06-9418-c2496e0206bc", 00:12:30.533 "strip_size_kb": 64, 00:12:30.533 "state": "online", 00:12:30.533 "raid_level": "concat", 00:12:30.533 "superblock": true, 00:12:30.533 "num_base_bdevs": 2, 00:12:30.533 "num_base_bdevs_discovered": 2, 00:12:30.533 "num_base_bdevs_operational": 2, 00:12:30.533 "base_bdevs_list": [ 00:12:30.533 { 00:12:30.533 "name": "pt1", 00:12:30.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:30.533 "is_configured": true, 00:12:30.533 "data_offset": 2048, 00:12:30.533 "data_size": 63488 00:12:30.533 }, 00:12:30.533 { 00:12:30.533 "name": "pt2", 00:12:30.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.533 "is_configured": true, 00:12:30.533 "data_offset": 2048, 00:12:30.533 "data_size": 63488 00:12:30.533 } 00:12:30.533 ] 00:12:30.533 } 00:12:30.533 } 00:12:30.533 }' 00:12:30.533 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:12:30.533 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:30.533 pt2' 00:12:30.533 04:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.533 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.791 04:34:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.791 [2024-11-27 04:34:18.160968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ebe7ead3-0bfb-4c06-9418-c2496e0206bc '!=' ebe7ead3-0bfb-4c06-9418-c2496e0206bc ']' 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62302 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62302 ']' 00:12:30.791 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62302 00:12:30.792 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:30.792 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.792 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62302 00:12:30.792 killing process with pid 62302 
00:12:30.792 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.792 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.792 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62302' 00:12:30.792 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62302 00:12:30.792 [2024-11-27 04:34:18.240122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.792 04:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62302 00:12:30.792 [2024-11-27 04:34:18.240234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.792 [2024-11-27 04:34:18.240305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.792 [2024-11-27 04:34:18.240325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:31.050 [2024-11-27 04:34:18.428033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.985 04:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:31.985 00:12:31.985 real 0m4.815s 00:12:31.985 user 0m7.057s 00:12:31.985 sys 0m0.713s 00:12:31.985 04:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.985 ************************************ 00:12:31.985 END TEST raid_superblock_test 00:12:31.985 ************************************ 00:12:31.985 04:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.985 04:34:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:12:31.985 04:34:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:31.985 04:34:19 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.985 04:34:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.985 ************************************ 00:12:31.985 START TEST raid_read_error_test 00:12:31.985 ************************************ 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:31.985 04:34:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dBWkrSj2uv 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62513 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62513 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62513 ']' 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.985 04:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.243 [2024-11-27 04:34:19.647546] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:32.243 [2024-11-27 04:34:19.647718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62513 ] 00:12:32.243 [2024-11-27 04:34:19.836475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.503 [2024-11-27 04:34:19.968087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.762 [2024-11-27 04:34:20.173006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.762 [2024-11-27 04:34:20.173096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 BaseBdev1_malloc 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 true 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 [2024-11-27 04:34:20.708432] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:33.329 [2024-11-27 04:34:20.708499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.329 [2024-11-27 04:34:20.708528] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:33.329 [2024-11-27 04:34:20.708545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.329 [2024-11-27 04:34:20.711289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.329 [2024-11-27 04:34:20.711341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.329 BaseBdev1 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:33.329 BaseBdev2_malloc 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 true 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 [2024-11-27 04:34:20.764292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:33.329 [2024-11-27 04:34:20.764360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.329 [2024-11-27 04:34:20.764385] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:33.329 [2024-11-27 04:34:20.764402] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.329 [2024-11-27 04:34:20.767197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.329 [2024-11-27 04:34:20.767375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.329 BaseBdev2 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:33.329 
04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 [2024-11-27 04:34:20.772371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.330 [2024-11-27 04:34:20.774801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.330 [2024-11-27 04:34:20.775058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:33.330 [2024-11-27 04:34:20.775083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:33.330 [2024-11-27 04:34:20.775376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:33.330 [2024-11-27 04:34:20.775594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:33.330 [2024-11-27 04:34:20.775615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:33.330 [2024-11-27 04:34:20.775832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.330 "name": "raid_bdev1", 00:12:33.330 "uuid": "dbc6fe50-2ed9-4cd3-b79e-8c0af7304dd0", 00:12:33.330 "strip_size_kb": 64, 00:12:33.330 "state": "online", 00:12:33.330 "raid_level": "concat", 00:12:33.330 "superblock": true, 00:12:33.330 "num_base_bdevs": 2, 00:12:33.330 "num_base_bdevs_discovered": 2, 00:12:33.330 "num_base_bdevs_operational": 2, 00:12:33.330 "base_bdevs_list": [ 00:12:33.330 { 00:12:33.330 "name": "BaseBdev1", 00:12:33.330 "uuid": "80ed452f-a580-567b-8e1c-edf7938fba08", 00:12:33.330 "is_configured": true, 00:12:33.330 "data_offset": 2048, 00:12:33.330 "data_size": 63488 00:12:33.330 }, 00:12:33.330 { 00:12:33.330 "name": "BaseBdev2", 00:12:33.330 "uuid": "e5f2d1b2-6b0e-537c-9b0b-e272039ac032", 00:12:33.330 "is_configured": true, 00:12:33.330 "data_offset": 2048, 00:12:33.330 "data_size": 63488 00:12:33.330 } 00:12:33.330 ] 00:12:33.330 }' 00:12:33.330 04:34:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.330 04:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 04:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:33.897 04:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:33.897 [2024-11-27 04:34:21.417988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.836 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.836 "name": "raid_bdev1", 00:12:34.836 "uuid": "dbc6fe50-2ed9-4cd3-b79e-8c0af7304dd0", 00:12:34.836 "strip_size_kb": 64, 00:12:34.836 "state": "online", 00:12:34.836 "raid_level": "concat", 00:12:34.836 "superblock": true, 00:12:34.836 "num_base_bdevs": 2, 00:12:34.837 "num_base_bdevs_discovered": 2, 00:12:34.837 "num_base_bdevs_operational": 2, 00:12:34.837 "base_bdevs_list": [ 00:12:34.837 { 00:12:34.837 "name": "BaseBdev1", 00:12:34.837 "uuid": "80ed452f-a580-567b-8e1c-edf7938fba08", 00:12:34.837 "is_configured": true, 00:12:34.837 "data_offset": 2048, 00:12:34.837 "data_size": 63488 00:12:34.837 }, 00:12:34.837 { 00:12:34.837 "name": "BaseBdev2", 00:12:34.837 "uuid": "e5f2d1b2-6b0e-537c-9b0b-e272039ac032", 00:12:34.837 "is_configured": true, 00:12:34.837 "data_offset": 2048, 00:12:34.837 "data_size": 63488 00:12:34.837 } 00:12:34.837 ] 00:12:34.837 }' 00:12:34.837 04:34:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.837 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.404 [2024-11-27 04:34:22.833297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.404 [2024-11-27 04:34:22.833339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.404 [2024-11-27 04:34:22.836884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.404 [2024-11-27 04:34:22.837070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.404 [2024-11-27 04:34:22.837166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.404 [2024-11-27 04:34:22.837385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:35.404 { 00:12:35.404 "results": [ 00:12:35.404 { 00:12:35.404 "job": "raid_bdev1", 00:12:35.404 "core_mask": "0x1", 00:12:35.404 "workload": "randrw", 00:12:35.404 "percentage": 50, 00:12:35.404 "status": "finished", 00:12:35.404 "queue_depth": 1, 00:12:35.404 "io_size": 131072, 00:12:35.404 "runtime": 1.412812, 00:12:35.404 "iops": 10499.627692856517, 00:12:35.404 "mibps": 1312.4534616070646, 00:12:35.404 "io_failed": 1, 00:12:35.404 "io_timeout": 0, 00:12:35.404 "avg_latency_us": 132.32700162392376, 00:12:35.404 "min_latency_us": 43.28727272727273, 00:12:35.404 "max_latency_us": 1861.8181818181818 00:12:35.404 } 00:12:35.404 ], 00:12:35.404 "core_count": 1 00:12:35.404 } 00:12:35.404 04:34:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62513 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62513 ']' 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62513 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62513 00:12:35.404 killing process with pid 62513 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62513' 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62513 00:12:35.404 [2024-11-27 04:34:22.871328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.404 04:34:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62513 00:12:35.404 [2024-11-27 04:34:22.991205] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.781 04:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dBWkrSj2uv 00:12:36.781 04:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:36.781 04:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:36.781 04:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:36.781 04:34:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:36.781 04:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:36.781 04:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:36.781 ************************************ 00:12:36.781 END TEST raid_read_error_test 00:12:36.781 ************************************ 00:12:36.781 04:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:36.781 00:12:36.781 real 0m4.568s 00:12:36.781 user 0m5.750s 00:12:36.781 sys 0m0.547s 00:12:36.781 04:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.781 04:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.781 04:34:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:12:36.781 04:34:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:36.781 04:34:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.781 04:34:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.781 ************************************ 00:12:36.781 START TEST raid_write_error_test 00:12:36.781 ************************************ 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2ILzfc06fJ 00:12:36.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62659 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62659 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62659 ']' 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.781 04:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.781 [2024-11-27 04:34:24.271104] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:12:36.781 [2024-11-27 04:34:24.271296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62659 ] 00:12:37.040 [2024-11-27 04:34:24.458456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.040 [2024-11-27 04:34:24.614040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.298 [2024-11-27 04:34:24.831708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.299 [2024-11-27 04:34:24.831765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.867 BaseBdev1_malloc 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.867 true 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.867 [2024-11-27 04:34:25.312616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:37.867 [2024-11-27 04:34:25.312687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.867 [2024-11-27 04:34:25.312725] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:37.867 [2024-11-27 04:34:25.312743] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.867 [2024-11-27 04:34:25.315550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.867 [2024-11-27 04:34:25.315736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:37.867 BaseBdev1 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.867 BaseBdev2_malloc 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:37.867 04:34:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.867 true 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.867 [2024-11-27 04:34:25.377360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:37.867 [2024-11-27 04:34:25.377433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.867 [2024-11-27 04:34:25.377460] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:37.867 [2024-11-27 04:34:25.377477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.867 [2024-11-27 04:34:25.380293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.867 [2024-11-27 04:34:25.380344] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:37.867 BaseBdev2 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.867 [2024-11-27 04:34:25.385428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:37.867 [2024-11-27 04:34:25.387929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.867 [2024-11-27 04:34:25.388188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:37.867 [2024-11-27 04:34:25.388213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:37.867 [2024-11-27 04:34:25.388517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:37.867 [2024-11-27 04:34:25.388739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:37.867 [2024-11-27 04:34:25.388762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:37.867 [2024-11-27 04:34:25.388980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.867 04:34:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.867 "name": "raid_bdev1", 00:12:37.867 "uuid": "c5558c97-9c98-4c0b-b957-1312987706c7", 00:12:37.867 "strip_size_kb": 64, 00:12:37.867 "state": "online", 00:12:37.867 "raid_level": "concat", 00:12:37.867 "superblock": true, 00:12:37.867 "num_base_bdevs": 2, 00:12:37.867 "num_base_bdevs_discovered": 2, 00:12:37.867 "num_base_bdevs_operational": 2, 00:12:37.867 "base_bdevs_list": [ 00:12:37.867 { 00:12:37.867 "name": "BaseBdev1", 00:12:37.867 "uuid": "ae83a30a-4a56-547c-8101-622f30a08f76", 00:12:37.867 "is_configured": true, 00:12:37.867 "data_offset": 2048, 00:12:37.867 "data_size": 63488 00:12:37.867 }, 00:12:37.867 { 00:12:37.867 "name": "BaseBdev2", 00:12:37.867 "uuid": "a021044f-744b-596c-991e-4070689147fb", 00:12:37.867 "is_configured": true, 00:12:37.867 "data_offset": 2048, 00:12:37.867 "data_size": 63488 00:12:37.867 } 00:12:37.867 ] 00:12:37.867 }' 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.867 04:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.433 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:12:38.433 04:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:38.433 [2024-11-27 04:34:25.974950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.374 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.374 "name": "raid_bdev1", 00:12:39.374 "uuid": "c5558c97-9c98-4c0b-b957-1312987706c7", 00:12:39.374 "strip_size_kb": 64, 00:12:39.374 "state": "online", 00:12:39.375 "raid_level": "concat", 00:12:39.375 "superblock": true, 00:12:39.375 "num_base_bdevs": 2, 00:12:39.375 "num_base_bdevs_discovered": 2, 00:12:39.375 "num_base_bdevs_operational": 2, 00:12:39.375 "base_bdevs_list": [ 00:12:39.375 { 00:12:39.375 "name": "BaseBdev1", 00:12:39.375 "uuid": "ae83a30a-4a56-547c-8101-622f30a08f76", 00:12:39.375 "is_configured": true, 00:12:39.375 "data_offset": 2048, 00:12:39.375 "data_size": 63488 00:12:39.375 }, 00:12:39.375 { 00:12:39.375 "name": "BaseBdev2", 00:12:39.375 "uuid": "a021044f-744b-596c-991e-4070689147fb", 00:12:39.375 "is_configured": true, 00:12:39.375 "data_offset": 2048, 00:12:39.375 "data_size": 63488 00:12:39.375 } 00:12:39.375 ] 00:12:39.375 }' 00:12:39.375 04:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.375 04:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.942 [2024-11-27 04:34:27.385978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:39.942 [2024-11-27 04:34:27.386023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.942 [2024-11-27 04:34:27.389457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.942 [2024-11-27 04:34:27.389516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.942 [2024-11-27 04:34:27.389560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.942 [2024-11-27 04:34:27.389582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:39.942 { 00:12:39.942 "results": [ 00:12:39.942 { 00:12:39.942 "job": "raid_bdev1", 00:12:39.942 "core_mask": "0x1", 00:12:39.942 "workload": "randrw", 00:12:39.942 "percentage": 50, 00:12:39.942 "status": "finished", 00:12:39.942 "queue_depth": 1, 00:12:39.942 "io_size": 131072, 00:12:39.942 "runtime": 1.408551, 00:12:39.942 "iops": 10330.47436692033, 00:12:39.942 "mibps": 1291.3092958650413, 00:12:39.942 "io_failed": 1, 00:12:39.942 "io_timeout": 0, 00:12:39.942 "avg_latency_us": 134.89730421310412, 00:12:39.942 "min_latency_us": 44.21818181818182, 00:12:39.942 "max_latency_us": 1832.0290909090909 00:12:39.942 } 00:12:39.942 ], 00:12:39.942 "core_count": 1 00:12:39.942 } 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62659 00:12:39.942 04:34:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62659 ']' 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62659 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62659 00:12:39.942 killing process with pid 62659 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62659' 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62659 00:12:39.942 [2024-11-27 04:34:27.427365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.942 04:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62659 00:12:39.942 [2024-11-27 04:34:27.550917] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.316 04:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:41.317 04:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2ILzfc06fJ 00:12:41.317 04:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:41.317 ************************************ 00:12:41.317 END TEST raid_write_error_test 00:12:41.317 ************************************ 00:12:41.317 04:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:41.317 04:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat 00:12:41.317 04:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:41.317 04:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:41.317 04:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:41.317 00:12:41.317 real 0m4.513s 00:12:41.317 user 0m5.583s 00:12:41.317 sys 0m0.571s 00:12:41.317 04:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.317 04:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.317 04:34:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:41.317 04:34:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:12:41.317 04:34:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:41.317 04:34:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.317 04:34:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.317 ************************************ 00:12:41.317 START TEST raid_state_function_test 00:12:41.317 ************************************ 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( 
i <= num_base_bdevs )) 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.317 Process raid pid: 62797 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62797 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process 
raid pid: 62797' 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62797 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62797 ']' 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.317 04:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.317 [2024-11-27 04:34:28.823686] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:12:41.317 [2024-11-27 04:34:28.824108] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.576 [2024-11-27 04:34:29.004452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.576 [2024-11-27 04:34:29.138216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.833 [2024-11-27 04:34:29.345571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.833 [2024-11-27 04:34:29.345844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.398 [2024-11-27 04:34:29.884903] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.398 [2024-11-27 04:34:29.884982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.398 [2024-11-27 04:34:29.885000] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.398 [2024-11-27 04:34:29.885017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.398 04:34:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.398 "name": "Existed_Raid", 00:12:42.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.398 "strip_size_kb": 0, 00:12:42.398 "state": "configuring", 00:12:42.398 
"raid_level": "raid1", 00:12:42.398 "superblock": false, 00:12:42.398 "num_base_bdevs": 2, 00:12:42.398 "num_base_bdevs_discovered": 0, 00:12:42.398 "num_base_bdevs_operational": 2, 00:12:42.398 "base_bdevs_list": [ 00:12:42.398 { 00:12:42.398 "name": "BaseBdev1", 00:12:42.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.398 "is_configured": false, 00:12:42.398 "data_offset": 0, 00:12:42.398 "data_size": 0 00:12:42.398 }, 00:12:42.398 { 00:12:42.398 "name": "BaseBdev2", 00:12:42.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.398 "is_configured": false, 00:12:42.398 "data_offset": 0, 00:12:42.398 "data_size": 0 00:12:42.398 } 00:12:42.398 ] 00:12:42.398 }' 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.398 04:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.967 [2024-11-27 04:34:30.372990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.967 [2024-11-27 04:34:30.373175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:42.967 [2024-11-27 04:34:30.380977] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.967 [2024-11-27 04:34:30.381049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.967 [2024-11-27 04:34:30.381067] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.967 [2024-11-27 04:34:30.381087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.967 [2024-11-27 04:34:30.425567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.967 BaseBdev1 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.967 [ 00:12:42.967 { 00:12:42.967 "name": "BaseBdev1", 00:12:42.967 "aliases": [ 00:12:42.967 "a462afbc-2f3d-4abc-84ad-2a34c584e1fd" 00:12:42.967 ], 00:12:42.967 "product_name": "Malloc disk", 00:12:42.967 "block_size": 512, 00:12:42.967 "num_blocks": 65536, 00:12:42.967 "uuid": "a462afbc-2f3d-4abc-84ad-2a34c584e1fd", 00:12:42.967 "assigned_rate_limits": { 00:12:42.967 "rw_ios_per_sec": 0, 00:12:42.967 "rw_mbytes_per_sec": 0, 00:12:42.967 "r_mbytes_per_sec": 0, 00:12:42.967 "w_mbytes_per_sec": 0 00:12:42.967 }, 00:12:42.967 "claimed": true, 00:12:42.967 "claim_type": "exclusive_write", 00:12:42.967 "zoned": false, 00:12:42.967 "supported_io_types": { 00:12:42.967 "read": true, 00:12:42.967 "write": true, 00:12:42.967 "unmap": true, 00:12:42.967 "flush": true, 00:12:42.967 "reset": true, 00:12:42.967 "nvme_admin": false, 00:12:42.967 "nvme_io": false, 00:12:42.967 "nvme_io_md": false, 00:12:42.967 "write_zeroes": true, 00:12:42.967 "zcopy": true, 00:12:42.967 "get_zone_info": false, 00:12:42.967 "zone_management": false, 00:12:42.967 "zone_append": false, 00:12:42.967 "compare": false, 00:12:42.967 "compare_and_write": false, 00:12:42.967 "abort": true, 00:12:42.967 "seek_hole": false, 00:12:42.967 "seek_data": false, 00:12:42.967 "copy": true, 00:12:42.967 "nvme_iov_md": 
false 00:12:42.967 }, 00:12:42.967 "memory_domains": [ 00:12:42.967 { 00:12:42.967 "dma_device_id": "system", 00:12:42.967 "dma_device_type": 1 00:12:42.967 }, 00:12:42.967 { 00:12:42.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.967 "dma_device_type": 2 00:12:42.967 } 00:12:42.967 ], 00:12:42.967 "driver_specific": {} 00:12:42.967 } 00:12:42.967 ] 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.967 04:34:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.967 "name": "Existed_Raid", 00:12:42.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.967 "strip_size_kb": 0, 00:12:42.967 "state": "configuring", 00:12:42.967 "raid_level": "raid1", 00:12:42.967 "superblock": false, 00:12:42.967 "num_base_bdevs": 2, 00:12:42.967 "num_base_bdevs_discovered": 1, 00:12:42.967 "num_base_bdevs_operational": 2, 00:12:42.967 "base_bdevs_list": [ 00:12:42.967 { 00:12:42.967 "name": "BaseBdev1", 00:12:42.967 "uuid": "a462afbc-2f3d-4abc-84ad-2a34c584e1fd", 00:12:42.967 "is_configured": true, 00:12:42.967 "data_offset": 0, 00:12:42.967 "data_size": 65536 00:12:42.967 }, 00:12:42.967 { 00:12:42.967 "name": "BaseBdev2", 00:12:42.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.967 "is_configured": false, 00:12:42.967 "data_offset": 0, 00:12:42.967 "data_size": 0 00:12:42.967 } 00:12:42.967 ] 00:12:42.967 }' 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.967 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.535 [2024-11-27 04:34:30.981783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.535 [2024-11-27 04:34:30.981856] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.535 [2024-11-27 04:34:30.989804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.535 [2024-11-27 04:34:30.992200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.535 [2024-11-27 04:34:30.992252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.535 04:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.535 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.535 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.535 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.535 "name": "Existed_Raid", 00:12:43.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.535 "strip_size_kb": 0, 00:12:43.535 "state": "configuring", 00:12:43.535 "raid_level": "raid1", 00:12:43.535 "superblock": false, 00:12:43.535 "num_base_bdevs": 2, 00:12:43.535 "num_base_bdevs_discovered": 1, 00:12:43.535 "num_base_bdevs_operational": 2, 00:12:43.535 "base_bdevs_list": [ 00:12:43.535 { 00:12:43.535 "name": "BaseBdev1", 00:12:43.535 "uuid": "a462afbc-2f3d-4abc-84ad-2a34c584e1fd", 00:12:43.535 "is_configured": true, 00:12:43.535 "data_offset": 0, 00:12:43.535 "data_size": 65536 00:12:43.535 }, 00:12:43.535 { 00:12:43.535 "name": "BaseBdev2", 00:12:43.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.535 "is_configured": false, 00:12:43.535 "data_offset": 0, 00:12:43.535 "data_size": 0 00:12:43.535 } 00:12:43.535 
] 00:12:43.535 }' 00:12:43.535 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.535 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.104 [2024-11-27 04:34:31.557645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.104 [2024-11-27 04:34:31.557746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:44.104 [2024-11-27 04:34:31.557771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:44.104 [2024-11-27 04:34:31.558157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:44.104 [2024-11-27 04:34:31.558402] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:44.104 [2024-11-27 04:34:31.558542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:44.104 [2024-11-27 04:34:31.558901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.104 BaseBdev2 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.104 04:34:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.104 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.104 [ 00:12:44.104 { 00:12:44.104 "name": "BaseBdev2", 00:12:44.104 "aliases": [ 00:12:44.104 "bcf35662-1805-4bc6-a9cc-8e1d83e4c50c" 00:12:44.104 ], 00:12:44.104 "product_name": "Malloc disk", 00:12:44.104 "block_size": 512, 00:12:44.104 "num_blocks": 65536, 00:12:44.104 "uuid": "bcf35662-1805-4bc6-a9cc-8e1d83e4c50c", 00:12:44.104 "assigned_rate_limits": { 00:12:44.104 "rw_ios_per_sec": 0, 00:12:44.104 "rw_mbytes_per_sec": 0, 00:12:44.104 "r_mbytes_per_sec": 0, 00:12:44.104 "w_mbytes_per_sec": 0 00:12:44.104 }, 00:12:44.104 "claimed": true, 00:12:44.104 "claim_type": "exclusive_write", 00:12:44.104 "zoned": false, 00:12:44.104 "supported_io_types": { 00:12:44.104 "read": true, 00:12:44.104 "write": true, 00:12:44.104 "unmap": true, 00:12:44.104 "flush": true, 00:12:44.104 "reset": true, 00:12:44.104 "nvme_admin": false, 00:12:44.104 "nvme_io": false, 00:12:44.104 "nvme_io_md": 
false, 00:12:44.104 "write_zeroes": true, 00:12:44.104 "zcopy": true, 00:12:44.104 "get_zone_info": false, 00:12:44.104 "zone_management": false, 00:12:44.104 "zone_append": false, 00:12:44.104 "compare": false, 00:12:44.104 "compare_and_write": false, 00:12:44.104 "abort": true, 00:12:44.104 "seek_hole": false, 00:12:44.104 "seek_data": false, 00:12:44.104 "copy": true, 00:12:44.104 "nvme_iov_md": false 00:12:44.104 }, 00:12:44.104 "memory_domains": [ 00:12:44.104 { 00:12:44.104 "dma_device_id": "system", 00:12:44.104 "dma_device_type": 1 00:12:44.104 }, 00:12:44.104 { 00:12:44.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.105 "dma_device_type": 2 00:12:44.105 } 00:12:44.105 ], 00:12:44.105 "driver_specific": {} 00:12:44.105 } 00:12:44.105 ] 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.105 "name": "Existed_Raid", 00:12:44.105 "uuid": "c2812d22-db8e-4a21-9976-de7b21d88502", 00:12:44.105 "strip_size_kb": 0, 00:12:44.105 "state": "online", 00:12:44.105 "raid_level": "raid1", 00:12:44.105 "superblock": false, 00:12:44.105 "num_base_bdevs": 2, 00:12:44.105 "num_base_bdevs_discovered": 2, 00:12:44.105 "num_base_bdevs_operational": 2, 00:12:44.105 "base_bdevs_list": [ 00:12:44.105 { 00:12:44.105 "name": "BaseBdev1", 00:12:44.105 "uuid": "a462afbc-2f3d-4abc-84ad-2a34c584e1fd", 00:12:44.105 "is_configured": true, 00:12:44.105 "data_offset": 0, 00:12:44.105 "data_size": 65536 00:12:44.105 }, 00:12:44.105 { 00:12:44.105 "name": "BaseBdev2", 00:12:44.105 "uuid": "bcf35662-1805-4bc6-a9cc-8e1d83e4c50c", 00:12:44.105 "is_configured": true, 00:12:44.105 "data_offset": 0, 00:12:44.105 "data_size": 65536 00:12:44.105 } 00:12:44.105 ] 00:12:44.105 }' 00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:44.105 04:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.672 [2024-11-27 04:34:32.122266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.672 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:44.672 "name": "Existed_Raid", 00:12:44.672 "aliases": [ 00:12:44.672 "c2812d22-db8e-4a21-9976-de7b21d88502" 00:12:44.672 ], 00:12:44.672 "product_name": "Raid Volume", 00:12:44.672 "block_size": 512, 00:12:44.672 "num_blocks": 65536, 00:12:44.672 "uuid": "c2812d22-db8e-4a21-9976-de7b21d88502", 00:12:44.672 "assigned_rate_limits": { 00:12:44.672 "rw_ios_per_sec": 0, 00:12:44.672 "rw_mbytes_per_sec": 0, 00:12:44.672 "r_mbytes_per_sec": 
0, 00:12:44.672 "w_mbytes_per_sec": 0 00:12:44.672 }, 00:12:44.672 "claimed": false, 00:12:44.672 "zoned": false, 00:12:44.672 "supported_io_types": { 00:12:44.672 "read": true, 00:12:44.672 "write": true, 00:12:44.672 "unmap": false, 00:12:44.672 "flush": false, 00:12:44.672 "reset": true, 00:12:44.672 "nvme_admin": false, 00:12:44.672 "nvme_io": false, 00:12:44.672 "nvme_io_md": false, 00:12:44.672 "write_zeroes": true, 00:12:44.672 "zcopy": false, 00:12:44.672 "get_zone_info": false, 00:12:44.672 "zone_management": false, 00:12:44.672 "zone_append": false, 00:12:44.672 "compare": false, 00:12:44.672 "compare_and_write": false, 00:12:44.672 "abort": false, 00:12:44.672 "seek_hole": false, 00:12:44.672 "seek_data": false, 00:12:44.672 "copy": false, 00:12:44.672 "nvme_iov_md": false 00:12:44.672 }, 00:12:44.672 "memory_domains": [ 00:12:44.672 { 00:12:44.672 "dma_device_id": "system", 00:12:44.672 "dma_device_type": 1 00:12:44.672 }, 00:12:44.672 { 00:12:44.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.672 "dma_device_type": 2 00:12:44.672 }, 00:12:44.672 { 00:12:44.672 "dma_device_id": "system", 00:12:44.672 "dma_device_type": 1 00:12:44.672 }, 00:12:44.672 { 00:12:44.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.672 "dma_device_type": 2 00:12:44.672 } 00:12:44.672 ], 00:12:44.673 "driver_specific": { 00:12:44.673 "raid": { 00:12:44.673 "uuid": "c2812d22-db8e-4a21-9976-de7b21d88502", 00:12:44.673 "strip_size_kb": 0, 00:12:44.673 "state": "online", 00:12:44.673 "raid_level": "raid1", 00:12:44.673 "superblock": false, 00:12:44.673 "num_base_bdevs": 2, 00:12:44.673 "num_base_bdevs_discovered": 2, 00:12:44.673 "num_base_bdevs_operational": 2, 00:12:44.673 "base_bdevs_list": [ 00:12:44.673 { 00:12:44.673 "name": "BaseBdev1", 00:12:44.673 "uuid": "a462afbc-2f3d-4abc-84ad-2a34c584e1fd", 00:12:44.673 "is_configured": true, 00:12:44.673 "data_offset": 0, 00:12:44.673 "data_size": 65536 00:12:44.673 }, 00:12:44.673 { 00:12:44.673 "name": "BaseBdev2", 
00:12:44.673 "uuid": "bcf35662-1805-4bc6-a9cc-8e1d83e4c50c", 00:12:44.673 "is_configured": true, 00:12:44.673 "data_offset": 0, 00:12:44.673 "data_size": 65536 00:12:44.673 } 00:12:44.673 ] 00:12:44.673 } 00:12:44.673 } 00:12:44.673 }' 00:12:44.673 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.673 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:44.673 BaseBdev2' 00:12:44.673 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.673 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:44.673 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.673 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:44.673 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.673 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.673 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.931 [2024-11-27 04:34:32.390075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.931 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.932 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.932 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.932 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.932 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.932 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.932 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.932 "name": "Existed_Raid", 00:12:44.932 "uuid": "c2812d22-db8e-4a21-9976-de7b21d88502", 00:12:44.932 "strip_size_kb": 0, 00:12:44.932 "state": "online", 00:12:44.932 "raid_level": "raid1", 00:12:44.932 "superblock": false, 00:12:44.932 "num_base_bdevs": 2, 00:12:44.932 "num_base_bdevs_discovered": 1, 00:12:44.932 "num_base_bdevs_operational": 1, 00:12:44.932 "base_bdevs_list": [ 00:12:44.932 
{ 00:12:44.932 "name": null, 00:12:44.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.932 "is_configured": false, 00:12:44.932 "data_offset": 0, 00:12:44.932 "data_size": 65536 00:12:44.932 }, 00:12:44.932 { 00:12:44.932 "name": "BaseBdev2", 00:12:44.932 "uuid": "bcf35662-1805-4bc6-a9cc-8e1d83e4c50c", 00:12:44.932 "is_configured": true, 00:12:44.932 "data_offset": 0, 00:12:44.932 "data_size": 65536 00:12:44.932 } 00:12:44.932 ] 00:12:44.932 }' 00:12:44.932 04:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.932 04:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.498 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:45.498 [2024-11-27 04:34:33.073917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:45.498 [2024-11-27 04:34:33.074175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.756 [2024-11-27 04:34:33.162740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.756 [2024-11-27 04:34:33.162830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.756 [2024-11-27 04:34:33.162883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62797 00:12:45.756 04:34:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62797 ']' 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62797 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62797 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.756 killing process with pid 62797 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62797' 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62797 00:12:45.756 [2024-11-27 04:34:33.249948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.756 04:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62797 00:12:45.756 [2024-11-27 04:34:33.264983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.132 ************************************ 00:12:47.132 END TEST raid_state_function_test 00:12:47.132 ************************************ 00:12:47.132 04:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:47.133 00:12:47.133 real 0m5.624s 00:12:47.133 user 0m8.506s 00:12:47.133 sys 0m0.785s 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.133 04:34:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:12:47.133 04:34:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:47.133 04:34:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.133 04:34:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:47.133 ************************************ 00:12:47.133 START TEST raid_state_function_test_sb 00:12:47.133 ************************************ 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:47.133 Process raid pid: 63061 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63061 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63061' 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63061 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63061 ']' 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.133 04:34:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.133 04:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.133 [2024-11-27 04:34:34.510333] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:47.133 [2024-11-27 04:34:34.510749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.133 [2024-11-27 04:34:34.700407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.394 [2024-11-27 04:34:34.851785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.652 [2024-11-27 04:34:35.064188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.652 [2024-11-27 04:34:35.064423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.219 [2024-11-27 04:34:35.555031] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.219 [2024-11-27 04:34:35.555299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.219 [2024-11-27 04:34:35.555455] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.219 [2024-11-27 04:34:35.555591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.219 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.220 04:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.220 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.220 "name": "Existed_Raid", 00:12:48.220 "uuid": "27a32243-c7ef-4b7f-ae1d-35a5146fb160", 00:12:48.220 "strip_size_kb": 0, 00:12:48.220 "state": "configuring", 00:12:48.220 "raid_level": "raid1", 00:12:48.220 "superblock": true, 00:12:48.220 "num_base_bdevs": 2, 00:12:48.220 "num_base_bdevs_discovered": 0, 00:12:48.220 "num_base_bdevs_operational": 2, 00:12:48.220 "base_bdevs_list": [ 00:12:48.220 { 00:12:48.220 "name": "BaseBdev1", 00:12:48.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.220 "is_configured": false, 00:12:48.220 "data_offset": 0, 00:12:48.220 "data_size": 0 00:12:48.220 }, 00:12:48.220 { 00:12:48.220 "name": "BaseBdev2", 00:12:48.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.220 "is_configured": false, 00:12:48.220 "data_offset": 0, 00:12:48.220 "data_size": 0 00:12:48.220 } 00:12:48.220 ] 00:12:48.220 }' 00:12:48.220 04:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.220 04:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.478 [2024-11-27 04:34:36.071198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:12:48.478 [2024-11-27 04:34:36.071239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.478 [2024-11-27 04:34:36.079239] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.478 [2024-11-27 04:34:36.079410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.478 [2024-11-27 04:34:36.079532] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.478 [2024-11-27 04:34:36.079597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.478 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.737 [2024-11-27 04:34:36.126862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.737 BaseBdev1 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.737 [ 00:12:48.737 { 00:12:48.737 "name": "BaseBdev1", 00:12:48.737 "aliases": [ 00:12:48.737 "cd338f32-a62e-4453-84f6-0aadef434b51" 00:12:48.737 ], 00:12:48.737 "product_name": "Malloc disk", 00:12:48.737 "block_size": 512, 00:12:48.737 "num_blocks": 65536, 00:12:48.737 "uuid": "cd338f32-a62e-4453-84f6-0aadef434b51", 00:12:48.737 "assigned_rate_limits": { 00:12:48.737 "rw_ios_per_sec": 0, 00:12:48.737 "rw_mbytes_per_sec": 0, 00:12:48.737 "r_mbytes_per_sec": 0, 00:12:48.737 "w_mbytes_per_sec": 0 00:12:48.737 }, 00:12:48.737 "claimed": true, 
00:12:48.737 "claim_type": "exclusive_write", 00:12:48.737 "zoned": false, 00:12:48.737 "supported_io_types": { 00:12:48.737 "read": true, 00:12:48.737 "write": true, 00:12:48.737 "unmap": true, 00:12:48.737 "flush": true, 00:12:48.737 "reset": true, 00:12:48.737 "nvme_admin": false, 00:12:48.737 "nvme_io": false, 00:12:48.737 "nvme_io_md": false, 00:12:48.737 "write_zeroes": true, 00:12:48.737 "zcopy": true, 00:12:48.737 "get_zone_info": false, 00:12:48.737 "zone_management": false, 00:12:48.737 "zone_append": false, 00:12:48.737 "compare": false, 00:12:48.737 "compare_and_write": false, 00:12:48.737 "abort": true, 00:12:48.737 "seek_hole": false, 00:12:48.737 "seek_data": false, 00:12:48.737 "copy": true, 00:12:48.737 "nvme_iov_md": false 00:12:48.737 }, 00:12:48.737 "memory_domains": [ 00:12:48.737 { 00:12:48.737 "dma_device_id": "system", 00:12:48.737 "dma_device_type": 1 00:12:48.737 }, 00:12:48.737 { 00:12:48.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.737 "dma_device_type": 2 00:12:48.737 } 00:12:48.737 ], 00:12:48.737 "driver_specific": {} 00:12:48.737 } 00:12:48.737 ] 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.737 "name": "Existed_Raid", 00:12:48.737 "uuid": "76d410ce-9b05-4bdf-bf2c-1496f031c2cb", 00:12:48.737 "strip_size_kb": 0, 00:12:48.737 "state": "configuring", 00:12:48.737 "raid_level": "raid1", 00:12:48.737 "superblock": true, 00:12:48.737 "num_base_bdevs": 2, 00:12:48.737 "num_base_bdevs_discovered": 1, 00:12:48.737 "num_base_bdevs_operational": 2, 00:12:48.737 "base_bdevs_list": [ 00:12:48.737 { 00:12:48.737 "name": "BaseBdev1", 00:12:48.737 "uuid": "cd338f32-a62e-4453-84f6-0aadef434b51", 00:12:48.737 "is_configured": true, 00:12:48.737 "data_offset": 2048, 00:12:48.737 "data_size": 63488 00:12:48.737 }, 00:12:48.737 { 00:12:48.737 "name": "BaseBdev2", 00:12:48.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.737 "is_configured": false, 00:12:48.737 
"data_offset": 0, 00:12:48.737 "data_size": 0 00:12:48.737 } 00:12:48.737 ] 00:12:48.737 }' 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.737 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.304 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:49.304 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.304 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.304 [2024-11-27 04:34:36.675108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.304 [2024-11-27 04:34:36.675171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:49.304 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.305 [2024-11-27 04:34:36.683127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.305 [2024-11-27 04:34:36.685590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.305 [2024-11-27 04:34:36.685647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.305 "name": "Existed_Raid", 00:12:49.305 "uuid": "d1a3699b-a556-4ec6-b58e-4472f2cd2974", 00:12:49.305 "strip_size_kb": 0, 00:12:49.305 "state": "configuring", 00:12:49.305 "raid_level": "raid1", 00:12:49.305 "superblock": true, 00:12:49.305 "num_base_bdevs": 2, 00:12:49.305 "num_base_bdevs_discovered": 1, 00:12:49.305 "num_base_bdevs_operational": 2, 00:12:49.305 "base_bdevs_list": [ 00:12:49.305 { 00:12:49.305 "name": "BaseBdev1", 00:12:49.305 "uuid": "cd338f32-a62e-4453-84f6-0aadef434b51", 00:12:49.305 "is_configured": true, 00:12:49.305 "data_offset": 2048, 00:12:49.305 "data_size": 63488 00:12:49.305 }, 00:12:49.305 { 00:12:49.305 "name": "BaseBdev2", 00:12:49.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.305 "is_configured": false, 00:12:49.305 "data_offset": 0, 00:12:49.305 "data_size": 0 00:12:49.305 } 00:12:49.305 ] 00:12:49.305 }' 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.305 04:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.872 [2024-11-27 04:34:37.231155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.872 [2024-11-27 04:34:37.231484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:49.872 [2024-11-27 04:34:37.231505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:49.872 BaseBdev2 00:12:49.872 [2024-11-27 04:34:37.231839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:12:49.872 [2024-11-27 04:34:37.232075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:49.872 [2024-11-27 04:34:37.232099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:49.872 [2024-11-27 04:34:37.232273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.872 [ 00:12:49.872 { 00:12:49.872 "name": "BaseBdev2", 00:12:49.872 "aliases": [ 00:12:49.872 "0501ad2c-3b67-48d8-b58b-2ae09cb64994" 00:12:49.872 ], 00:12:49.872 "product_name": "Malloc disk", 00:12:49.872 "block_size": 512, 00:12:49.872 "num_blocks": 65536, 00:12:49.872 "uuid": "0501ad2c-3b67-48d8-b58b-2ae09cb64994", 00:12:49.872 "assigned_rate_limits": { 00:12:49.872 "rw_ios_per_sec": 0, 00:12:49.872 "rw_mbytes_per_sec": 0, 00:12:49.872 "r_mbytes_per_sec": 0, 00:12:49.872 "w_mbytes_per_sec": 0 00:12:49.872 }, 00:12:49.872 "claimed": true, 00:12:49.872 "claim_type": "exclusive_write", 00:12:49.872 "zoned": false, 00:12:49.872 "supported_io_types": { 00:12:49.872 "read": true, 00:12:49.872 "write": true, 00:12:49.872 "unmap": true, 00:12:49.872 "flush": true, 00:12:49.872 "reset": true, 00:12:49.872 "nvme_admin": false, 00:12:49.872 "nvme_io": false, 00:12:49.872 "nvme_io_md": false, 00:12:49.872 "write_zeroes": true, 00:12:49.872 "zcopy": true, 00:12:49.872 "get_zone_info": false, 00:12:49.872 "zone_management": false, 00:12:49.872 "zone_append": false, 00:12:49.872 "compare": false, 00:12:49.872 "compare_and_write": false, 00:12:49.872 "abort": true, 00:12:49.872 "seek_hole": false, 00:12:49.872 "seek_data": false, 00:12:49.872 "copy": true, 00:12:49.872 "nvme_iov_md": false 00:12:49.872 }, 00:12:49.872 "memory_domains": [ 00:12:49.872 { 00:12:49.872 "dma_device_id": "system", 00:12:49.872 "dma_device_type": 1 00:12:49.872 }, 00:12:49.872 { 00:12:49.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.872 "dma_device_type": 2 00:12:49.872 } 00:12:49.872 ], 00:12:49.872 "driver_specific": {} 00:12:49.872 } 00:12:49.872 ] 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.872 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:49.872 "name": "Existed_Raid", 00:12:49.872 "uuid": "d1a3699b-a556-4ec6-b58e-4472f2cd2974", 00:12:49.872 "strip_size_kb": 0, 00:12:49.872 "state": "online", 00:12:49.872 "raid_level": "raid1", 00:12:49.872 "superblock": true, 00:12:49.872 "num_base_bdevs": 2, 00:12:49.872 "num_base_bdevs_discovered": 2, 00:12:49.872 "num_base_bdevs_operational": 2, 00:12:49.872 "base_bdevs_list": [ 00:12:49.872 { 00:12:49.873 "name": "BaseBdev1", 00:12:49.873 "uuid": "cd338f32-a62e-4453-84f6-0aadef434b51", 00:12:49.873 "is_configured": true, 00:12:49.873 "data_offset": 2048, 00:12:49.873 "data_size": 63488 00:12:49.873 }, 00:12:49.873 { 00:12:49.873 "name": "BaseBdev2", 00:12:49.873 "uuid": "0501ad2c-3b67-48d8-b58b-2ae09cb64994", 00:12:49.873 "is_configured": true, 00:12:49.873 "data_offset": 2048, 00:12:49.873 "data_size": 63488 00:12:49.873 } 00:12:49.873 ] 00:12:49.873 }' 00:12:49.873 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.873 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.439 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:50.439 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:50.440 04:34:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.440 [2024-11-27 04:34:37.787797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:50.440 "name": "Existed_Raid", 00:12:50.440 "aliases": [ 00:12:50.440 "d1a3699b-a556-4ec6-b58e-4472f2cd2974" 00:12:50.440 ], 00:12:50.440 "product_name": "Raid Volume", 00:12:50.440 "block_size": 512, 00:12:50.440 "num_blocks": 63488, 00:12:50.440 "uuid": "d1a3699b-a556-4ec6-b58e-4472f2cd2974", 00:12:50.440 "assigned_rate_limits": { 00:12:50.440 "rw_ios_per_sec": 0, 00:12:50.440 "rw_mbytes_per_sec": 0, 00:12:50.440 "r_mbytes_per_sec": 0, 00:12:50.440 "w_mbytes_per_sec": 0 00:12:50.440 }, 00:12:50.440 "claimed": false, 00:12:50.440 "zoned": false, 00:12:50.440 "supported_io_types": { 00:12:50.440 "read": true, 00:12:50.440 "write": true, 00:12:50.440 "unmap": false, 00:12:50.440 "flush": false, 00:12:50.440 "reset": true, 00:12:50.440 "nvme_admin": false, 00:12:50.440 "nvme_io": false, 00:12:50.440 "nvme_io_md": false, 00:12:50.440 "write_zeroes": true, 00:12:50.440 "zcopy": false, 00:12:50.440 "get_zone_info": false, 00:12:50.440 "zone_management": false, 00:12:50.440 "zone_append": false, 00:12:50.440 "compare": false, 00:12:50.440 "compare_and_write": false, 00:12:50.440 "abort": false, 00:12:50.440 "seek_hole": false, 00:12:50.440 "seek_data": false, 00:12:50.440 "copy": false, 00:12:50.440 "nvme_iov_md": false 00:12:50.440 }, 00:12:50.440 "memory_domains": [ 00:12:50.440 { 00:12:50.440 "dma_device_id": "system", 00:12:50.440 
"dma_device_type": 1 00:12:50.440 }, 00:12:50.440 { 00:12:50.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.440 "dma_device_type": 2 00:12:50.440 }, 00:12:50.440 { 00:12:50.440 "dma_device_id": "system", 00:12:50.440 "dma_device_type": 1 00:12:50.440 }, 00:12:50.440 { 00:12:50.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.440 "dma_device_type": 2 00:12:50.440 } 00:12:50.440 ], 00:12:50.440 "driver_specific": { 00:12:50.440 "raid": { 00:12:50.440 "uuid": "d1a3699b-a556-4ec6-b58e-4472f2cd2974", 00:12:50.440 "strip_size_kb": 0, 00:12:50.440 "state": "online", 00:12:50.440 "raid_level": "raid1", 00:12:50.440 "superblock": true, 00:12:50.440 "num_base_bdevs": 2, 00:12:50.440 "num_base_bdevs_discovered": 2, 00:12:50.440 "num_base_bdevs_operational": 2, 00:12:50.440 "base_bdevs_list": [ 00:12:50.440 { 00:12:50.440 "name": "BaseBdev1", 00:12:50.440 "uuid": "cd338f32-a62e-4453-84f6-0aadef434b51", 00:12:50.440 "is_configured": true, 00:12:50.440 "data_offset": 2048, 00:12:50.440 "data_size": 63488 00:12:50.440 }, 00:12:50.440 { 00:12:50.440 "name": "BaseBdev2", 00:12:50.440 "uuid": "0501ad2c-3b67-48d8-b58b-2ae09cb64994", 00:12:50.440 "is_configured": true, 00:12:50.440 "data_offset": 2048, 00:12:50.440 "data_size": 63488 00:12:50.440 } 00:12:50.440 ] 00:12:50.440 } 00:12:50.440 } 00:12:50.440 }' 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:50.440 BaseBdev2' 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.440 04:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.440 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.440 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.440 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.440 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:50.440 04:34:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.440 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.440 [2024-11-27 04:34:38.047521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.698 "name": "Existed_Raid", 00:12:50.698 "uuid": "d1a3699b-a556-4ec6-b58e-4472f2cd2974", 00:12:50.698 "strip_size_kb": 0, 00:12:50.698 "state": "online", 00:12:50.698 "raid_level": "raid1", 00:12:50.698 "superblock": true, 00:12:50.698 "num_base_bdevs": 2, 00:12:50.698 "num_base_bdevs_discovered": 1, 00:12:50.698 "num_base_bdevs_operational": 1, 00:12:50.698 "base_bdevs_list": [ 00:12:50.698 { 00:12:50.698 "name": null, 00:12:50.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.698 "is_configured": false, 00:12:50.698 "data_offset": 0, 00:12:50.698 "data_size": 63488 00:12:50.698 }, 00:12:50.698 { 00:12:50.698 "name": "BaseBdev2", 00:12:50.698 "uuid": "0501ad2c-3b67-48d8-b58b-2ae09cb64994", 00:12:50.698 "is_configured": true, 00:12:50.698 "data_offset": 2048, 00:12:50.698 "data_size": 63488 00:12:50.698 } 00:12:50.698 ] 00:12:50.698 }' 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.698 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.266 [2024-11-27 04:34:38.711072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.266 [2024-11-27 04:34:38.711213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.266 [2024-11-27 04:34:38.800499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.266 [2024-11-27 04:34:38.800579] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.266 [2024-11-27 04:34:38.800602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.266 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63061 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63061 ']' 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63061 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63061 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.267 04:34:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63061' 00:12:51.267 killing process with pid 63061 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63061 00:12:51.267 [2024-11-27 04:34:38.886123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.267 04:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63061 00:12:51.525 [2024-11-27 04:34:38.900996] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.459 04:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:52.459 00:12:52.459 real 0m5.571s 00:12:52.459 user 0m8.359s 00:12:52.459 sys 0m0.837s 00:12:52.459 04:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.459 ************************************ 00:12:52.459 END TEST raid_state_function_test_sb 00:12:52.459 ************************************ 00:12:52.459 04:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.459 04:34:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:12:52.459 04:34:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:52.459 04:34:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.459 04:34:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.459 ************************************ 00:12:52.459 START TEST raid_superblock_test 00:12:52.459 ************************************ 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:52.459 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:52.460 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63313 00:12:52.460 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63313 00:12:52.460 04:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63313 ']' 00:12:52.460 04:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.460 04:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:52.460 04:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.460 04:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.460 04:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.460 04:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.717 [2024-11-27 04:34:40.129876] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:52.717 [2024-11-27 04:34:40.130301] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63313 ] 00:12:52.717 [2024-11-27 04:34:40.308169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.975 [2024-11-27 04:34:40.440021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.233 [2024-11-27 04:34:40.642826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.233 [2024-11-27 04:34:40.642907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:53.491 04:34:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.491 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.749 malloc1 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.749 [2024-11-27 04:34:41.126745] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:53.749 [2024-11-27 04:34:41.126974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.749 [2024-11-27 04:34:41.127055] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:53.749 [2024-11-27 04:34:41.127250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.749 
[2024-11-27 04:34:41.130139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.749 [2024-11-27 04:34:41.130309] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:53.749 pt1 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.749 malloc2 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.749 04:34:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.749 [2024-11-27 04:34:41.183547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:53.749 [2024-11-27 04:34:41.183620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.749 [2024-11-27 04:34:41.183658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:53.749 [2024-11-27 04:34:41.183674] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.749 [2024-11-27 04:34:41.186542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.749 [2024-11-27 04:34:41.186589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:53.749 pt2 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.749 [2024-11-27 04:34:41.195640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:53.749 [2024-11-27 04:34:41.198113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:53.749 [2024-11-27 04:34:41.198460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:53.749 [2024-11-27 04:34:41.198492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:53.749 [2024-11-27 
04:34:41.198832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:53.749 [2024-11-27 04:34:41.199045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:53.749 [2024-11-27 04:34:41.199071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:53.749 [2024-11-27 04:34:41.199277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.749 04:34:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.749 "name": "raid_bdev1", 00:12:53.749 "uuid": "651476e7-57c7-47fc-a923-e6c9b675a5b6", 00:12:53.749 "strip_size_kb": 0, 00:12:53.749 "state": "online", 00:12:53.749 "raid_level": "raid1", 00:12:53.749 "superblock": true, 00:12:53.749 "num_base_bdevs": 2, 00:12:53.749 "num_base_bdevs_discovered": 2, 00:12:53.749 "num_base_bdevs_operational": 2, 00:12:53.749 "base_bdevs_list": [ 00:12:53.749 { 00:12:53.749 "name": "pt1", 00:12:53.749 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:53.749 "is_configured": true, 00:12:53.749 "data_offset": 2048, 00:12:53.749 "data_size": 63488 00:12:53.749 }, 00:12:53.749 { 00:12:53.749 "name": "pt2", 00:12:53.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:53.749 "is_configured": true, 00:12:53.749 "data_offset": 2048, 00:12:53.749 "data_size": 63488 00:12:53.749 } 00:12:53.749 ] 00:12:53.749 }' 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.749 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.314 
04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.314 [2024-11-27 04:34:41.728113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.314 "name": "raid_bdev1", 00:12:54.314 "aliases": [ 00:12:54.314 "651476e7-57c7-47fc-a923-e6c9b675a5b6" 00:12:54.314 ], 00:12:54.314 "product_name": "Raid Volume", 00:12:54.314 "block_size": 512, 00:12:54.314 "num_blocks": 63488, 00:12:54.314 "uuid": "651476e7-57c7-47fc-a923-e6c9b675a5b6", 00:12:54.314 "assigned_rate_limits": { 00:12:54.314 "rw_ios_per_sec": 0, 00:12:54.314 "rw_mbytes_per_sec": 0, 00:12:54.314 "r_mbytes_per_sec": 0, 00:12:54.314 "w_mbytes_per_sec": 0 00:12:54.314 }, 00:12:54.314 "claimed": false, 00:12:54.314 "zoned": false, 00:12:54.314 "supported_io_types": { 00:12:54.314 "read": true, 00:12:54.314 "write": true, 00:12:54.314 "unmap": false, 00:12:54.314 "flush": false, 00:12:54.314 "reset": true, 00:12:54.314 "nvme_admin": false, 00:12:54.314 "nvme_io": false, 00:12:54.314 "nvme_io_md": false, 00:12:54.314 "write_zeroes": true, 00:12:54.314 "zcopy": false, 00:12:54.314 "get_zone_info": false, 00:12:54.314 "zone_management": false, 00:12:54.314 "zone_append": false, 00:12:54.314 "compare": false, 00:12:54.314 "compare_and_write": false, 00:12:54.314 "abort": false, 00:12:54.314 "seek_hole": false, 
00:12:54.314 "seek_data": false, 00:12:54.314 "copy": false, 00:12:54.314 "nvme_iov_md": false 00:12:54.314 }, 00:12:54.314 "memory_domains": [ 00:12:54.314 { 00:12:54.314 "dma_device_id": "system", 00:12:54.314 "dma_device_type": 1 00:12:54.314 }, 00:12:54.314 { 00:12:54.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.314 "dma_device_type": 2 00:12:54.314 }, 00:12:54.314 { 00:12:54.314 "dma_device_id": "system", 00:12:54.314 "dma_device_type": 1 00:12:54.314 }, 00:12:54.314 { 00:12:54.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.314 "dma_device_type": 2 00:12:54.314 } 00:12:54.314 ], 00:12:54.314 "driver_specific": { 00:12:54.314 "raid": { 00:12:54.314 "uuid": "651476e7-57c7-47fc-a923-e6c9b675a5b6", 00:12:54.314 "strip_size_kb": 0, 00:12:54.314 "state": "online", 00:12:54.314 "raid_level": "raid1", 00:12:54.314 "superblock": true, 00:12:54.314 "num_base_bdevs": 2, 00:12:54.314 "num_base_bdevs_discovered": 2, 00:12:54.314 "num_base_bdevs_operational": 2, 00:12:54.314 "base_bdevs_list": [ 00:12:54.314 { 00:12:54.314 "name": "pt1", 00:12:54.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:54.314 "is_configured": true, 00:12:54.314 "data_offset": 2048, 00:12:54.314 "data_size": 63488 00:12:54.314 }, 00:12:54.314 { 00:12:54.314 "name": "pt2", 00:12:54.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.314 "is_configured": true, 00:12:54.314 "data_offset": 2048, 00:12:54.314 "data_size": 63488 00:12:54.314 } 00:12:54.314 ] 00:12:54.314 } 00:12:54.314 } 00:12:54.314 }' 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.314 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:54.314 pt2' 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.315 04:34:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.315 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.573 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.573 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.573 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.573 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:54.573 04:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:54.573 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.573 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.573 [2024-11-27 04:34:41.980181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.573 04:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=651476e7-57c7-47fc-a923-e6c9b675a5b6 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 651476e7-57c7-47fc-a923-e6c9b675a5b6 ']' 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.573 [2024-11-27 04:34:42.031813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:54.573 [2024-11-27 04:34:42.031974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.573 [2024-11-27 04:34:42.032212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.573 [2024-11-27 04:34:42.032397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.573 [2024-11-27 04:34:42.032624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:54.573 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.574 [2024-11-27 04:34:42.155916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:54.574 [2024-11-27 04:34:42.158609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:54.574 [2024-11-27 04:34:42.158831] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:12:54.574 [2024-11-27 04:34:42.159077] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:54.574 [2024-11-27 04:34:42.159357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:54.574 [2024-11-27 04:34:42.159472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:54.574 request: 00:12:54.574 { 00:12:54.574 "name": "raid_bdev1", 00:12:54.574 "raid_level": "raid1", 00:12:54.574 "base_bdevs": [ 00:12:54.574 "malloc1", 00:12:54.574 "malloc2" 00:12:54.574 ], 00:12:54.574 "superblock": false, 00:12:54.574 "method": "bdev_raid_create", 00:12:54.574 "req_id": 1 00:12:54.574 } 00:12:54.574 Got JSON-RPC error response 00:12:54.574 response: 00:12:54.574 { 00:12:54.574 "code": -17, 00:12:54.574 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:54.574 } 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.574 04:34:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.832 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:54.832 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:54.832 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:54.832 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.832 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.832 [2024-11-27 04:34:42.223933] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:54.832 [2024-11-27 04:34:42.224155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.832 [2024-11-27 04:34:42.224231] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:54.832 [2024-11-27 04:34:42.224356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.832 [2024-11-27 04:34:42.227376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.832 [2024-11-27 04:34:42.227537] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:54.832 [2024-11-27 04:34:42.227785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:54.832 [2024-11-27 04:34:42.227970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:54.832 pt1 00:12:54.832 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.832 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:54.832 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.832 04:34:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.832 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.833 "name": "raid_bdev1", 00:12:54.833 "uuid": "651476e7-57c7-47fc-a923-e6c9b675a5b6", 00:12:54.833 "strip_size_kb": 0, 00:12:54.833 "state": "configuring", 00:12:54.833 "raid_level": "raid1", 00:12:54.833 "superblock": true, 00:12:54.833 "num_base_bdevs": 2, 00:12:54.833 "num_base_bdevs_discovered": 1, 00:12:54.833 "num_base_bdevs_operational": 2, 00:12:54.833 "base_bdevs_list": [ 00:12:54.833 { 00:12:54.833 "name": "pt1", 00:12:54.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:54.833 
"is_configured": true, 00:12:54.833 "data_offset": 2048, 00:12:54.833 "data_size": 63488 00:12:54.833 }, 00:12:54.833 { 00:12:54.833 "name": null, 00:12:54.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.833 "is_configured": false, 00:12:54.833 "data_offset": 2048, 00:12:54.833 "data_size": 63488 00:12:54.833 } 00:12:54.833 ] 00:12:54.833 }' 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.833 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.400 [2024-11-27 04:34:42.744412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:55.400 [2024-11-27 04:34:42.744506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.400 [2024-11-27 04:34:42.744539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:55.400 [2024-11-27 04:34:42.744558] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.400 [2024-11-27 04:34:42.745199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.400 [2024-11-27 04:34:42.745239] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:55.400 [2024-11-27 04:34:42.745345] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:55.400 [2024-11-27 04:34:42.745387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:55.400 [2024-11-27 04:34:42.745550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:55.400 [2024-11-27 04:34:42.745571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:55.400 [2024-11-27 04:34:42.745920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:55.400 [2024-11-27 04:34:42.746148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:55.400 [2024-11-27 04:34:42.746163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:55.400 [2024-11-27 04:34:42.746336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.400 pt2 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:55.400 
04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.400 "name": "raid_bdev1", 00:12:55.400 "uuid": "651476e7-57c7-47fc-a923-e6c9b675a5b6", 00:12:55.400 "strip_size_kb": 0, 00:12:55.400 "state": "online", 00:12:55.400 "raid_level": "raid1", 00:12:55.400 "superblock": true, 00:12:55.400 "num_base_bdevs": 2, 00:12:55.400 "num_base_bdevs_discovered": 2, 00:12:55.400 "num_base_bdevs_operational": 2, 00:12:55.400 "base_bdevs_list": [ 00:12:55.400 { 00:12:55.400 "name": "pt1", 00:12:55.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.400 "is_configured": true, 00:12:55.400 "data_offset": 2048, 00:12:55.400 "data_size": 63488 00:12:55.400 }, 00:12:55.400 { 00:12:55.400 "name": "pt2", 00:12:55.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.400 "is_configured": true, 00:12:55.400 "data_offset": 2048, 00:12:55.400 "data_size": 63488 00:12:55.400 } 00:12:55.400 ] 00:12:55.400 }' 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:55.400 04:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.659 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:55.659 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:55.659 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.659 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.659 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.659 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.659 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.659 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.659 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.659 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.659 [2024-11-27 04:34:43.268859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.917 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.917 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.917 "name": "raid_bdev1", 00:12:55.917 "aliases": [ 00:12:55.917 "651476e7-57c7-47fc-a923-e6c9b675a5b6" 00:12:55.917 ], 00:12:55.917 "product_name": "Raid Volume", 00:12:55.917 "block_size": 512, 00:12:55.917 "num_blocks": 63488, 00:12:55.917 "uuid": "651476e7-57c7-47fc-a923-e6c9b675a5b6", 00:12:55.917 "assigned_rate_limits": { 00:12:55.917 "rw_ios_per_sec": 0, 00:12:55.917 "rw_mbytes_per_sec": 0, 00:12:55.917 "r_mbytes_per_sec": 0, 00:12:55.917 "w_mbytes_per_sec": 0 
00:12:55.917 }, 00:12:55.917 "claimed": false, 00:12:55.917 "zoned": false, 00:12:55.917 "supported_io_types": { 00:12:55.917 "read": true, 00:12:55.917 "write": true, 00:12:55.917 "unmap": false, 00:12:55.917 "flush": false, 00:12:55.917 "reset": true, 00:12:55.917 "nvme_admin": false, 00:12:55.917 "nvme_io": false, 00:12:55.917 "nvme_io_md": false, 00:12:55.917 "write_zeroes": true, 00:12:55.917 "zcopy": false, 00:12:55.917 "get_zone_info": false, 00:12:55.917 "zone_management": false, 00:12:55.917 "zone_append": false, 00:12:55.917 "compare": false, 00:12:55.917 "compare_and_write": false, 00:12:55.917 "abort": false, 00:12:55.917 "seek_hole": false, 00:12:55.917 "seek_data": false, 00:12:55.917 "copy": false, 00:12:55.917 "nvme_iov_md": false 00:12:55.917 }, 00:12:55.917 "memory_domains": [ 00:12:55.917 { 00:12:55.917 "dma_device_id": "system", 00:12:55.917 "dma_device_type": 1 00:12:55.917 }, 00:12:55.917 { 00:12:55.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.917 "dma_device_type": 2 00:12:55.917 }, 00:12:55.917 { 00:12:55.917 "dma_device_id": "system", 00:12:55.917 "dma_device_type": 1 00:12:55.917 }, 00:12:55.917 { 00:12:55.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.917 "dma_device_type": 2 00:12:55.917 } 00:12:55.917 ], 00:12:55.917 "driver_specific": { 00:12:55.917 "raid": { 00:12:55.917 "uuid": "651476e7-57c7-47fc-a923-e6c9b675a5b6", 00:12:55.917 "strip_size_kb": 0, 00:12:55.917 "state": "online", 00:12:55.917 "raid_level": "raid1", 00:12:55.917 "superblock": true, 00:12:55.917 "num_base_bdevs": 2, 00:12:55.917 "num_base_bdevs_discovered": 2, 00:12:55.917 "num_base_bdevs_operational": 2, 00:12:55.917 "base_bdevs_list": [ 00:12:55.917 { 00:12:55.917 "name": "pt1", 00:12:55.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.917 "is_configured": true, 00:12:55.917 "data_offset": 2048, 00:12:55.917 "data_size": 63488 00:12:55.917 }, 00:12:55.917 { 00:12:55.917 "name": "pt2", 00:12:55.917 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:12:55.917 "is_configured": true, 00:12:55.917 "data_offset": 2048, 00:12:55.917 "data_size": 63488 00:12:55.917 } 00:12:55.917 ] 00:12:55.917 } 00:12:55.917 } 00:12:55.917 }' 00:12:55.917 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:55.918 pt2' 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.918 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.176 [2024-11-27 04:34:43.536940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 651476e7-57c7-47fc-a923-e6c9b675a5b6 '!=' 651476e7-57c7-47fc-a923-e6c9b675a5b6 ']' 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.176 [2024-11-27 04:34:43.580693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:56.176 "name": "raid_bdev1", 00:12:56.176 "uuid": "651476e7-57c7-47fc-a923-e6c9b675a5b6", 00:12:56.176 "strip_size_kb": 0, 00:12:56.176 "state": "online", 00:12:56.176 "raid_level": "raid1", 00:12:56.176 "superblock": true, 00:12:56.176 "num_base_bdevs": 2, 00:12:56.176 "num_base_bdevs_discovered": 1, 00:12:56.176 "num_base_bdevs_operational": 1, 00:12:56.176 "base_bdevs_list": [ 00:12:56.176 { 00:12:56.176 "name": null, 00:12:56.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.176 "is_configured": false, 00:12:56.176 "data_offset": 0, 00:12:56.176 "data_size": 63488 00:12:56.176 }, 00:12:56.176 { 00:12:56.176 "name": "pt2", 00:12:56.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.176 "is_configured": true, 00:12:56.176 "data_offset": 2048, 00:12:56.176 "data_size": 63488 00:12:56.176 } 00:12:56.176 ] 00:12:56.176 }' 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.176 04:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.743 [2024-11-27 04:34:44.108785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.743 [2024-11-27 04:34:44.108822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.743 [2024-11-27 04:34:44.108928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.743 [2024-11-27 04:34:44.108997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.743 [2024-11-27 04:34:44.109016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.743 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.743 [2024-11-27 04:34:44.180754] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:56.743 [2024-11-27 04:34:44.180852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.743 [2024-11-27 04:34:44.180880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:56.743 [2024-11-27 04:34:44.180897] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.743 [2024-11-27 04:34:44.183953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.743 [2024-11-27 04:34:44.184146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:56.743 [2024-11-27 04:34:44.184272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:56.743 [2024-11-27 04:34:44.184360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:56.743 [2024-11-27 04:34:44.184494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:56.743 [2024-11-27 04:34:44.184517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:56.743 [2024-11-27 04:34:44.184830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:56.743 [2024-11-27 04:34:44.185054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:56.744 [2024-11-27 04:34:44.185079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:12:56.744 [2024-11-27 04:34:44.185308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.744 pt2 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:56.744 "name": "raid_bdev1", 00:12:56.744 "uuid": "651476e7-57c7-47fc-a923-e6c9b675a5b6", 00:12:56.744 "strip_size_kb": 0, 00:12:56.744 "state": "online", 00:12:56.744 "raid_level": "raid1", 00:12:56.744 "superblock": true, 00:12:56.744 "num_base_bdevs": 2, 00:12:56.744 "num_base_bdevs_discovered": 1, 00:12:56.744 "num_base_bdevs_operational": 1, 00:12:56.744 "base_bdevs_list": [ 00:12:56.744 { 00:12:56.744 "name": null, 00:12:56.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.744 "is_configured": false, 00:12:56.744 "data_offset": 2048, 00:12:56.744 "data_size": 63488 00:12:56.744 }, 00:12:56.744 { 00:12:56.744 "name": "pt2", 00:12:56.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.744 "is_configured": true, 00:12:56.744 "data_offset": 2048, 00:12:56.744 "data_size": 63488 00:12:56.744 } 00:12:56.744 ] 00:12:56.744 }' 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.744 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.312 [2024-11-27 04:34:44.689367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.312 [2024-11-27 04:34:44.689407] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.312 [2024-11-27 04:34:44.689506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.312 [2024-11-27 04:34:44.689592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.312 [2024-11-27 04:34:44.689608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.312 [2024-11-27 04:34:44.769421] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:57.312 [2024-11-27 04:34:44.769502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.312 [2024-11-27 04:34:44.769536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:57.312 [2024-11-27 04:34:44.769554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.312 [2024-11-27 04:34:44.772621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.312 [2024-11-27 04:34:44.772672] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:57.312 [2024-11-27 04:34:44.772825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:57.312 [2024-11-27 04:34:44.772889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:57.312 [2024-11-27 04:34:44.773087] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:57.312 [2024-11-27 04:34:44.773106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.312 [2024-11-27 04:34:44.773143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:57.312 [2024-11-27 04:34:44.773210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.312 [2024-11-27 04:34:44.773331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:57.312 [2024-11-27 04:34:44.773347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:57.312 [2024-11-27 04:34:44.773694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:57.312 [2024-11-27 04:34:44.773950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:57.312 [2024-11-27 04:34:44.773974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:57.312 [2024-11-27 04:34:44.774216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.312 pt1 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.312 "name": "raid_bdev1", 00:12:57.312 "uuid": "651476e7-57c7-47fc-a923-e6c9b675a5b6", 00:12:57.312 "strip_size_kb": 0, 00:12:57.312 "state": "online", 00:12:57.312 "raid_level": "raid1", 00:12:57.312 "superblock": true, 00:12:57.312 "num_base_bdevs": 2, 00:12:57.312 "num_base_bdevs_discovered": 1, 00:12:57.312 "num_base_bdevs_operational": 
1, 00:12:57.312 "base_bdevs_list": [ 00:12:57.312 { 00:12:57.312 "name": null, 00:12:57.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.312 "is_configured": false, 00:12:57.312 "data_offset": 2048, 00:12:57.312 "data_size": 63488 00:12:57.312 }, 00:12:57.312 { 00:12:57.312 "name": "pt2", 00:12:57.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.312 "is_configured": true, 00:12:57.312 "data_offset": 2048, 00:12:57.312 "data_size": 63488 00:12:57.312 } 00:12:57.312 ] 00:12:57.312 }' 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.312 04:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.879 [2024-11-27 04:34:45.349892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 651476e7-57c7-47fc-a923-e6c9b675a5b6 '!=' 651476e7-57c7-47fc-a923-e6c9b675a5b6 ']' 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63313 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63313 ']' 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63313 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63313 00:12:57.879 killing process with pid 63313 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63313' 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63313 00:12:57.879 [2024-11-27 04:34:45.433621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.879 04:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63313 00:12:57.879 [2024-11-27 04:34:45.433750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.879 [2024-11-27 04:34:45.433841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.879 [2024-11-27 04:34:45.433867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:12:58.138 [2024-11-27 04:34:45.619904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.072 04:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:59.072 00:12:59.072 real 0m6.660s 00:12:59.072 user 0m10.525s 00:12:59.072 sys 0m0.955s 00:12:59.072 04:34:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.072 ************************************ 00:12:59.072 END TEST raid_superblock_test 00:12:59.073 ************************************ 00:12:59.073 04:34:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.332 04:34:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:12:59.332 04:34:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:59.332 04:34:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.332 04:34:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.332 ************************************ 00:12:59.332 START TEST raid_read_error_test 00:12:59.332 ************************************ 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CPvbZUV2ar 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63649 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63649 00:12:59.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63649 ']' 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.332 04:34:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.332 [2024-11-27 04:34:46.857536] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:12:59.332 [2024-11-27 04:34:46.858036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63649 ] 00:12:59.591 [2024-11-27 04:34:47.046838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.591 [2024-11-27 04:34:47.201683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.858 [2024-11-27 04:34:47.405726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.858 [2024-11-27 04:34:47.405816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 BaseBdev1_malloc 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 true 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 [2024-11-27 04:34:47.915128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:00.439 [2024-11-27 04:34:47.915350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.439 [2024-11-27 04:34:47.915394] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:00.439 [2024-11-27 04:34:47.915414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.439 [2024-11-27 04:34:47.918259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.439 [2024-11-27 04:34:47.918312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.439 BaseBdev1 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 BaseBdev2_malloc 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 true 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 [2024-11-27 04:34:47.975449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:00.439 [2024-11-27 04:34:47.975676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.439 [2024-11-27 04:34:47.975714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:00.439 [2024-11-27 04:34:47.975734] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.439 [2024-11-27 04:34:47.978537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.439 [2024-11-27 04:34:47.978590] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:00.439 BaseBdev2 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 [2024-11-27 04:34:47.987598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.439 
[2024-11-27 04:34:47.990108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.439 [2024-11-27 04:34:47.990534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:00.439 [2024-11-27 04:34:47.990569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.439 [2024-11-27 04:34:47.990901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:00.439 [2024-11-27 04:34:47.991146] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:00.439 [2024-11-27 04:34:47.991171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:00.439 [2024-11-27 04:34:47.991374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 04:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 04:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.439 04:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.439 "name": "raid_bdev1", 00:13:00.439 "uuid": "118e0104-f077-4dc1-9c0f-0424f5f8584e", 00:13:00.439 "strip_size_kb": 0, 00:13:00.439 "state": "online", 00:13:00.440 "raid_level": "raid1", 00:13:00.440 "superblock": true, 00:13:00.440 "num_base_bdevs": 2, 00:13:00.440 "num_base_bdevs_discovered": 2, 00:13:00.440 "num_base_bdevs_operational": 2, 00:13:00.440 "base_bdevs_list": [ 00:13:00.440 { 00:13:00.440 "name": "BaseBdev1", 00:13:00.440 "uuid": "03d10eb4-edf4-509b-9972-32cbbc179a52", 00:13:00.440 "is_configured": true, 00:13:00.440 "data_offset": 2048, 00:13:00.440 "data_size": 63488 00:13:00.440 }, 00:13:00.440 { 00:13:00.440 "name": "BaseBdev2", 00:13:00.440 "uuid": "80cbbe7b-479d-510d-9d39-61e327052549", 00:13:00.440 "is_configured": true, 00:13:00.440 "data_offset": 2048, 00:13:00.440 "data_size": 63488 00:13:00.440 } 00:13:00.440 ] 00:13:00.440 }' 00:13:00.440 04:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.440 04:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 04:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:01.005 04:34:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:01.006 [2024-11-27 04:34:48.609159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.941 "name": "raid_bdev1", 00:13:01.941 "uuid": "118e0104-f077-4dc1-9c0f-0424f5f8584e", 00:13:01.941 "strip_size_kb": 0, 00:13:01.941 "state": "online", 00:13:01.941 "raid_level": "raid1", 00:13:01.941 "superblock": true, 00:13:01.941 "num_base_bdevs": 2, 00:13:01.941 "num_base_bdevs_discovered": 2, 00:13:01.941 "num_base_bdevs_operational": 2, 00:13:01.941 "base_bdevs_list": [ 00:13:01.941 { 00:13:01.941 "name": "BaseBdev1", 00:13:01.941 "uuid": "03d10eb4-edf4-509b-9972-32cbbc179a52", 00:13:01.941 "is_configured": true, 00:13:01.941 "data_offset": 2048, 00:13:01.941 "data_size": 63488 00:13:01.941 }, 00:13:01.941 { 00:13:01.941 "name": "BaseBdev2", 00:13:01.941 "uuid": "80cbbe7b-479d-510d-9d39-61e327052549", 00:13:01.941 "is_configured": true, 00:13:01.941 "data_offset": 2048, 00:13:01.941 "data_size": 63488 00:13:01.941 } 00:13:01.941 ] 00:13:01.941 }' 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.941 04:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.509 04:34:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.509 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.509 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.509 [2024-11-27 04:34:50.032009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.509 [2024-11-27 04:34:50.032055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.509 { 00:13:02.509 "results": [ 00:13:02.509 { 00:13:02.509 "job": "raid_bdev1", 00:13:02.509 "core_mask": "0x1", 00:13:02.509 "workload": "randrw", 00:13:02.509 "percentage": 50, 00:13:02.509 "status": "finished", 00:13:02.509 "queue_depth": 1, 00:13:02.509 "io_size": 131072, 00:13:02.509 "runtime": 1.420329, 00:13:02.509 "iops": 11945.119757464643, 00:13:02.509 "mibps": 1493.1399696830804, 00:13:02.509 "io_failed": 0, 00:13:02.509 "io_timeout": 0, 00:13:02.509 "avg_latency_us": 79.48826337166311, 00:13:02.509 "min_latency_us": 45.14909090909091, 00:13:02.509 "max_latency_us": 1817.1345454545456 00:13:02.509 } 00:13:02.509 ], 00:13:02.509 "core_count": 1 00:13:02.509 } 00:13:02.509 [2024-11-27 04:34:50.035443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.509 [2024-11-27 04:34:50.035510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.509 [2024-11-27 04:34:50.035624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.509 [2024-11-27 04:34:50.035647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:02.509 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.509 04:34:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63649 00:13:02.509 04:34:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63649 ']' 00:13:02.509 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63649 00:13:02.510 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:02.510 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.510 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63649 00:13:02.510 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.510 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.510 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63649' 00:13:02.510 killing process with pid 63649 00:13:02.510 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63649 00:13:02.510 04:34:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63649 00:13:02.510 [2024-11-27 04:34:50.085157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.766 [2024-11-27 04:34:50.204915] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.698 04:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CPvbZUV2ar 00:13:03.698 04:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:03.698 04:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:03.698 04:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:03.698 04:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:03.698 04:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:03.698 04:34:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:03.698 04:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:03.698 ************************************ 00:13:03.698 END TEST raid_read_error_test 00:13:03.698 ************************************ 00:13:03.698 00:13:03.698 real 0m4.580s 00:13:03.698 user 0m5.753s 00:13:03.698 sys 0m0.576s 00:13:03.698 04:34:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.698 04:34:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.956 04:34:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:13:03.956 04:34:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:03.956 04:34:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.956 04:34:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.956 ************************************ 00:13:03.956 START TEST raid_write_error_test 00:13:03.956 ************************************ 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.956 
04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:03.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4IqwE309Ku 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63795 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63795 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63795 ']' 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.956 04:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.956 [2024-11-27 04:34:51.480592] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:13:03.956 [2024-11-27 04:34:51.480812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63795 ] 00:13:04.214 [2024-11-27 04:34:51.668171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.214 [2024-11-27 04:34:51.820927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.475 [2024-11-27 04:34:52.033162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.475 [2024-11-27 04:34:52.033428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.040 BaseBdev1_malloc 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.040 true 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.040 [2024-11-27 04:34:52.513264] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:05.040 [2024-11-27 04:34:52.513334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.040 [2024-11-27 04:34:52.513363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:05.040 [2024-11-27 04:34:52.513381] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.040 [2024-11-27 04:34:52.516134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.040 [2024-11-27 04:34:52.516184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:05.040 BaseBdev1 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.040 BaseBdev2_malloc 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:05.040 04:34:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.040 true 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.040 [2024-11-27 04:34:52.573174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:05.040 [2024-11-27 04:34:52.573240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.040 [2024-11-27 04:34:52.573265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:05.040 [2024-11-27 04:34:52.573282] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.040 [2024-11-27 04:34:52.576095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.040 [2024-11-27 04:34:52.576158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:05.040 BaseBdev2 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.040 [2024-11-27 04:34:52.581283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:13:05.040 [2024-11-27 04:34:52.583963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.040 [2024-11-27 04:34:52.584223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:05.040 [2024-11-27 04:34:52.584248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:05.040 [2024-11-27 04:34:52.584564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:05.040 [2024-11-27 04:34:52.584778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:05.040 [2024-11-27 04:34:52.584811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:05.040 [2024-11-27 04:34:52.585257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.040 "name": "raid_bdev1", 00:13:05.040 "uuid": "76ecfbc7-4f14-4115-87d4-2989717caf4a", 00:13:05.040 "strip_size_kb": 0, 00:13:05.040 "state": "online", 00:13:05.040 "raid_level": "raid1", 00:13:05.040 "superblock": true, 00:13:05.040 "num_base_bdevs": 2, 00:13:05.040 "num_base_bdevs_discovered": 2, 00:13:05.040 "num_base_bdevs_operational": 2, 00:13:05.040 "base_bdevs_list": [ 00:13:05.040 { 00:13:05.040 "name": "BaseBdev1", 00:13:05.040 "uuid": "47340209-0f1a-56f8-8a72-d26577643cd4", 00:13:05.040 "is_configured": true, 00:13:05.040 "data_offset": 2048, 00:13:05.040 "data_size": 63488 00:13:05.040 }, 00:13:05.040 { 00:13:05.040 "name": "BaseBdev2", 00:13:05.040 "uuid": "66ff6a16-f6a4-57a4-aab5-8ee2b41c3e65", 00:13:05.040 "is_configured": true, 00:13:05.040 "data_offset": 2048, 00:13:05.040 "data_size": 63488 00:13:05.040 } 00:13:05.040 ] 00:13:05.040 }' 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.040 04:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.605 04:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:05.605 04:34:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.862 [2024-11-27 04:34:53.242910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.795 [2024-11-27 04:34:54.131163] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:06.795 [2024-11-27 04:34:54.131232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.795 [2024-11-27 04:34:54.131463] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.795 "name": "raid_bdev1", 00:13:06.795 "uuid": "76ecfbc7-4f14-4115-87d4-2989717caf4a", 00:13:06.795 "strip_size_kb": 0, 00:13:06.795 "state": "online", 00:13:06.795 "raid_level": "raid1", 00:13:06.795 "superblock": true, 00:13:06.795 "num_base_bdevs": 2, 00:13:06.795 "num_base_bdevs_discovered": 1, 00:13:06.795 "num_base_bdevs_operational": 1, 00:13:06.795 "base_bdevs_list": [ 00:13:06.795 { 00:13:06.795 "name": null, 00:13:06.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.795 "is_configured": false, 00:13:06.795 "data_offset": 0, 00:13:06.795 "data_size": 63488 00:13:06.795 }, 00:13:06.795 { 00:13:06.795 "name": 
"BaseBdev2", 00:13:06.795 "uuid": "66ff6a16-f6a4-57a4-aab5-8ee2b41c3e65", 00:13:06.795 "is_configured": true, 00:13:06.795 "data_offset": 2048, 00:13:06.795 "data_size": 63488 00:13:06.795 } 00:13:06.795 ] 00:13:06.795 }' 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.795 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.053 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.053 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.053 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.053 [2024-11-27 04:34:54.658093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.053 [2024-11-27 04:34:54.658250] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.053 [2024-11-27 04:34:54.661732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.053 [2024-11-27 04:34:54.661923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.053 [2024-11-27 04:34:54.662182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr{ 00:13:07.053 "results": [ 00:13:07.053 { 00:13:07.053 "job": "raid_bdev1", 00:13:07.053 "core_mask": "0x1", 00:13:07.053 "workload": "randrw", 00:13:07.053 "percentage": 50, 00:13:07.053 "status": "finished", 00:13:07.053 "queue_depth": 1, 00:13:07.053 "io_size": 131072, 00:13:07.053 "runtime": 1.413097, 00:13:07.053 "iops": 14855.314249481811, 00:13:07.053 "mibps": 1856.9142811852264, 00:13:07.053 "io_failed": 0, 00:13:07.053 "io_timeout": 0, 00:13:07.053 "avg_latency_us": 63.0369844789357, 00:13:07.053 "min_latency_us": 41.89090909090909, 00:13:07.053 "max_latency_us": 1802.24 00:13:07.053 } 00:13:07.053 ], 00:13:07.053 
"core_count": 1 00:13:07.053 } 00:13:07.053 ee all in destruct 00:13:07.053 [2024-11-27 04:34:54.662333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:07.053 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.053 04:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63795 00:13:07.053 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63795 ']' 00:13:07.053 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63795 00:13:07.053 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:07.053 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.311 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63795 00:13:07.311 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.311 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.311 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63795' 00:13:07.311 killing process with pid 63795 00:13:07.311 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63795 00:13:07.311 [2024-11-27 04:34:54.700954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.311 04:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63795 00:13:07.311 [2024-11-27 04:34:54.823256] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.687 04:34:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4IqwE309Ku 00:13:08.687 04:34:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
grep raid_bdev1 00:13:08.687 04:34:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:08.687 04:34:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:08.687 04:34:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:08.687 04:34:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.687 04:34:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:08.687 04:34:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:08.687 00:13:08.687 real 0m4.600s 00:13:08.687 user 0m5.786s 00:13:08.687 sys 0m0.542s 00:13:08.687 ************************************ 00:13:08.687 END TEST raid_write_error_test 00:13:08.687 04:34:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.687 04:34:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.687 ************************************ 00:13:08.687 04:34:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:08.687 04:34:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:08.687 04:34:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:13:08.687 04:34:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:08.687 04:34:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.687 04:34:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.687 ************************************ 00:13:08.687 START TEST raid_state_function_test 00:13:08.687 ************************************ 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:08.687 04:34:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:08.687 Process raid pid: 63938 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63938 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63938' 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63938 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63938 ']' 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.687 04:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.687 [2024-11-27 04:34:56.106992] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:13:08.687 [2024-11-27 04:34:56.107344] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.687 [2024-11-27 04:34:56.280908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.945 [2024-11-27 04:34:56.414847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.203 [2024-11-27 04:34:56.623426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.203 [2024-11-27 04:34:56.623485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.461 [2024-11-27 04:34:57.045199] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.461 [2024-11-27 04:34:57.045265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.461 [2024-11-27 04:34:57.045283] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.461 [2024-11-27 04:34:57.045299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.461 [2024-11-27 04:34:57.045308] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:09.461 [2024-11-27 04:34:57.045322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.461 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.719 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.719 "name": "Existed_Raid", 00:13:09.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.719 "strip_size_kb": 64, 00:13:09.719 "state": "configuring", 00:13:09.719 "raid_level": "raid0", 00:13:09.719 "superblock": false, 00:13:09.719 "num_base_bdevs": 3, 00:13:09.719 "num_base_bdevs_discovered": 0, 00:13:09.719 "num_base_bdevs_operational": 3, 00:13:09.719 "base_bdevs_list": [ 00:13:09.719 { 00:13:09.719 "name": "BaseBdev1", 00:13:09.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.719 "is_configured": false, 00:13:09.719 "data_offset": 0, 00:13:09.719 "data_size": 0 00:13:09.719 }, 00:13:09.719 { 00:13:09.719 "name": "BaseBdev2", 00:13:09.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.719 "is_configured": false, 00:13:09.719 "data_offset": 0, 00:13:09.719 "data_size": 0 00:13:09.719 }, 00:13:09.719 { 00:13:09.719 "name": "BaseBdev3", 00:13:09.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.719 "is_configured": false, 00:13:09.719 "data_offset": 0, 00:13:09.719 "data_size": 0 00:13:09.719 } 00:13:09.719 ] 00:13:09.719 }' 00:13:09.719 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.719 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.977 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:09.977 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.977 04:34:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.977 [2024-11-27 04:34:57.573272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:09.977 [2024-11-27 04:34:57.573446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:09.977 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.977 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:09.977 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.977 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.977 [2024-11-27 04:34:57.581261] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.977 [2024-11-27 04:34:57.581317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.977 [2024-11-27 04:34:57.581333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.977 [2024-11-27 04:34:57.581348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.977 [2024-11-27 04:34:57.581357] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:09.977 [2024-11-27 04:34:57.581370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.977 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.977 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:09.977 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:09.977 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.235 [2024-11-27 04:34:57.625891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.235 BaseBdev1 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.235 [ 00:13:10.235 { 00:13:10.235 "name": "BaseBdev1", 00:13:10.235 "aliases": [ 00:13:10.235 "9dbde0f3-fead-4cea-a455-322fffcfc5c0" 00:13:10.235 ], 00:13:10.235 
"product_name": "Malloc disk", 00:13:10.235 "block_size": 512, 00:13:10.235 "num_blocks": 65536, 00:13:10.235 "uuid": "9dbde0f3-fead-4cea-a455-322fffcfc5c0", 00:13:10.235 "assigned_rate_limits": { 00:13:10.235 "rw_ios_per_sec": 0, 00:13:10.235 "rw_mbytes_per_sec": 0, 00:13:10.235 "r_mbytes_per_sec": 0, 00:13:10.235 "w_mbytes_per_sec": 0 00:13:10.235 }, 00:13:10.235 "claimed": true, 00:13:10.235 "claim_type": "exclusive_write", 00:13:10.235 "zoned": false, 00:13:10.235 "supported_io_types": { 00:13:10.235 "read": true, 00:13:10.235 "write": true, 00:13:10.235 "unmap": true, 00:13:10.235 "flush": true, 00:13:10.235 "reset": true, 00:13:10.235 "nvme_admin": false, 00:13:10.235 "nvme_io": false, 00:13:10.235 "nvme_io_md": false, 00:13:10.235 "write_zeroes": true, 00:13:10.235 "zcopy": true, 00:13:10.235 "get_zone_info": false, 00:13:10.235 "zone_management": false, 00:13:10.235 "zone_append": false, 00:13:10.235 "compare": false, 00:13:10.235 "compare_and_write": false, 00:13:10.235 "abort": true, 00:13:10.235 "seek_hole": false, 00:13:10.235 "seek_data": false, 00:13:10.235 "copy": true, 00:13:10.235 "nvme_iov_md": false 00:13:10.235 }, 00:13:10.235 "memory_domains": [ 00:13:10.235 { 00:13:10.235 "dma_device_id": "system", 00:13:10.235 "dma_device_type": 1 00:13:10.235 }, 00:13:10.235 { 00:13:10.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.235 "dma_device_type": 2 00:13:10.235 } 00:13:10.235 ], 00:13:10.235 "driver_specific": {} 00:13:10.235 } 00:13:10.235 ] 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.235 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.236 04:34:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.236 "name": "Existed_Raid", 00:13:10.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.236 "strip_size_kb": 64, 00:13:10.236 "state": "configuring", 00:13:10.236 "raid_level": "raid0", 00:13:10.236 "superblock": false, 00:13:10.236 "num_base_bdevs": 3, 00:13:10.236 "num_base_bdevs_discovered": 1, 00:13:10.236 "num_base_bdevs_operational": 3, 00:13:10.236 "base_bdevs_list": [ 00:13:10.236 { 00:13:10.236 "name": "BaseBdev1", 
00:13:10.236 "uuid": "9dbde0f3-fead-4cea-a455-322fffcfc5c0", 00:13:10.236 "is_configured": true, 00:13:10.236 "data_offset": 0, 00:13:10.236 "data_size": 65536 00:13:10.236 }, 00:13:10.236 { 00:13:10.236 "name": "BaseBdev2", 00:13:10.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.236 "is_configured": false, 00:13:10.236 "data_offset": 0, 00:13:10.236 "data_size": 0 00:13:10.236 }, 00:13:10.236 { 00:13:10.236 "name": "BaseBdev3", 00:13:10.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.236 "is_configured": false, 00:13:10.236 "data_offset": 0, 00:13:10.236 "data_size": 0 00:13:10.236 } 00:13:10.236 ] 00:13:10.236 }' 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.236 04:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.802 [2024-11-27 04:34:58.174113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.802 [2024-11-27 04:34:58.174176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.802 [2024-11-27 
04:34:58.182138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.802 [2024-11-27 04:34:58.184616] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.802 [2024-11-27 04:34:58.184671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.802 [2024-11-27 04:34:58.184687] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:10.802 [2024-11-27 04:34:58.184702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.802 "name": "Existed_Raid", 00:13:10.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.802 "strip_size_kb": 64, 00:13:10.802 "state": "configuring", 00:13:10.802 "raid_level": "raid0", 00:13:10.802 "superblock": false, 00:13:10.802 "num_base_bdevs": 3, 00:13:10.802 "num_base_bdevs_discovered": 1, 00:13:10.802 "num_base_bdevs_operational": 3, 00:13:10.802 "base_bdevs_list": [ 00:13:10.802 { 00:13:10.802 "name": "BaseBdev1", 00:13:10.802 "uuid": "9dbde0f3-fead-4cea-a455-322fffcfc5c0", 00:13:10.802 "is_configured": true, 00:13:10.802 "data_offset": 0, 00:13:10.802 "data_size": 65536 00:13:10.802 }, 00:13:10.802 { 00:13:10.802 "name": "BaseBdev2", 00:13:10.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.802 "is_configured": false, 00:13:10.802 "data_offset": 0, 00:13:10.802 "data_size": 0 00:13:10.802 }, 00:13:10.802 { 00:13:10.802 "name": "BaseBdev3", 00:13:10.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.802 "is_configured": false, 00:13:10.802 "data_offset": 0, 00:13:10.802 "data_size": 0 00:13:10.802 } 00:13:10.802 ] 00:13:10.802 }' 00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:10.802 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.370 [2024-11-27 04:34:58.732144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.370 BaseBdev2 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:11.370 04:34:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.370 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.370 [ 00:13:11.370 { 00:13:11.370 "name": "BaseBdev2", 00:13:11.370 "aliases": [ 00:13:11.370 "08e61611-5436-44a6-9e0c-1ee53b31a2ce" 00:13:11.370 ], 00:13:11.370 "product_name": "Malloc disk", 00:13:11.370 "block_size": 512, 00:13:11.370 "num_blocks": 65536, 00:13:11.370 "uuid": "08e61611-5436-44a6-9e0c-1ee53b31a2ce", 00:13:11.370 "assigned_rate_limits": { 00:13:11.370 "rw_ios_per_sec": 0, 00:13:11.370 "rw_mbytes_per_sec": 0, 00:13:11.370 "r_mbytes_per_sec": 0, 00:13:11.370 "w_mbytes_per_sec": 0 00:13:11.370 }, 00:13:11.370 "claimed": true, 00:13:11.370 "claim_type": "exclusive_write", 00:13:11.370 "zoned": false, 00:13:11.370 "supported_io_types": { 00:13:11.370 "read": true, 00:13:11.370 "write": true, 00:13:11.370 "unmap": true, 00:13:11.370 "flush": true, 00:13:11.370 "reset": true, 00:13:11.370 "nvme_admin": false, 00:13:11.370 "nvme_io": false, 00:13:11.370 "nvme_io_md": false, 00:13:11.370 "write_zeroes": true, 00:13:11.370 "zcopy": true, 00:13:11.370 "get_zone_info": false, 00:13:11.370 "zone_management": false, 00:13:11.370 "zone_append": false, 00:13:11.370 "compare": false, 00:13:11.370 "compare_and_write": false, 00:13:11.370 "abort": true, 00:13:11.370 "seek_hole": false, 00:13:11.370 "seek_data": false, 00:13:11.370 "copy": true, 00:13:11.370 "nvme_iov_md": false 00:13:11.370 }, 00:13:11.370 "memory_domains": [ 00:13:11.370 { 00:13:11.370 "dma_device_id": "system", 00:13:11.370 "dma_device_type": 1 00:13:11.371 }, 00:13:11.371 { 00:13:11.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.371 "dma_device_type": 2 00:13:11.371 } 00:13:11.371 ], 00:13:11.371 "driver_specific": {} 00:13:11.371 } 00:13:11.371 ] 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.371 04:34:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.371 "name": "Existed_Raid", 00:13:11.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.371 "strip_size_kb": 64, 00:13:11.371 "state": "configuring", 00:13:11.371 "raid_level": "raid0", 00:13:11.371 "superblock": false, 00:13:11.371 "num_base_bdevs": 3, 00:13:11.371 "num_base_bdevs_discovered": 2, 00:13:11.371 "num_base_bdevs_operational": 3, 00:13:11.371 "base_bdevs_list": [ 00:13:11.371 { 00:13:11.371 "name": "BaseBdev1", 00:13:11.371 "uuid": "9dbde0f3-fead-4cea-a455-322fffcfc5c0", 00:13:11.371 "is_configured": true, 00:13:11.371 "data_offset": 0, 00:13:11.371 "data_size": 65536 00:13:11.371 }, 00:13:11.371 { 00:13:11.371 "name": "BaseBdev2", 00:13:11.371 "uuid": "08e61611-5436-44a6-9e0c-1ee53b31a2ce", 00:13:11.371 "is_configured": true, 00:13:11.371 "data_offset": 0, 00:13:11.371 "data_size": 65536 00:13:11.371 }, 00:13:11.371 { 00:13:11.371 "name": "BaseBdev3", 00:13:11.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.371 "is_configured": false, 00:13:11.371 "data_offset": 0, 00:13:11.371 "data_size": 0 00:13:11.371 } 00:13:11.371 ] 00:13:11.371 }' 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.371 04:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.936 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:11.936 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.936 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.936 [2024-11-27 04:34:59.317849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.936 [2024-11-27 04:34:59.317903] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:11.936 [2024-11-27 04:34:59.317924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:11.936 [2024-11-27 04:34:59.318269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:11.936 [2024-11-27 04:34:59.318492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:11.937 [2024-11-27 04:34:59.318509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:11.937 [2024-11-27 04:34:59.318855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.937 BaseBdev3 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.937 
04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.937 [ 00:13:11.937 { 00:13:11.937 "name": "BaseBdev3", 00:13:11.937 "aliases": [ 00:13:11.937 "40890da6-c4a1-4436-997d-1829577968a2" 00:13:11.937 ], 00:13:11.937 "product_name": "Malloc disk", 00:13:11.937 "block_size": 512, 00:13:11.937 "num_blocks": 65536, 00:13:11.937 "uuid": "40890da6-c4a1-4436-997d-1829577968a2", 00:13:11.937 "assigned_rate_limits": { 00:13:11.937 "rw_ios_per_sec": 0, 00:13:11.937 "rw_mbytes_per_sec": 0, 00:13:11.937 "r_mbytes_per_sec": 0, 00:13:11.937 "w_mbytes_per_sec": 0 00:13:11.937 }, 00:13:11.937 "claimed": true, 00:13:11.937 "claim_type": "exclusive_write", 00:13:11.937 "zoned": false, 00:13:11.937 "supported_io_types": { 00:13:11.937 "read": true, 00:13:11.937 "write": true, 00:13:11.937 "unmap": true, 00:13:11.937 "flush": true, 00:13:11.937 "reset": true, 00:13:11.937 "nvme_admin": false, 00:13:11.937 "nvme_io": false, 00:13:11.937 "nvme_io_md": false, 00:13:11.937 "write_zeroes": true, 00:13:11.937 "zcopy": true, 00:13:11.937 "get_zone_info": false, 00:13:11.937 "zone_management": false, 00:13:11.937 "zone_append": false, 00:13:11.937 "compare": false, 00:13:11.937 "compare_and_write": false, 00:13:11.937 "abort": true, 00:13:11.937 "seek_hole": false, 00:13:11.937 "seek_data": false, 00:13:11.937 "copy": true, 00:13:11.937 "nvme_iov_md": false 00:13:11.937 }, 00:13:11.937 "memory_domains": [ 00:13:11.937 { 00:13:11.937 "dma_device_id": "system", 00:13:11.937 "dma_device_type": 1 00:13:11.937 }, 00:13:11.937 { 00:13:11.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.937 "dma_device_type": 2 00:13:11.937 } 00:13:11.937 ], 00:13:11.937 "driver_specific": {} 00:13:11.937 } 00:13:11.937 ] 
00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.937 "name": "Existed_Raid", 00:13:11.937 "uuid": "34e165e6-5326-4140-8191-46e3f7f86ced", 00:13:11.937 "strip_size_kb": 64, 00:13:11.937 "state": "online", 00:13:11.937 "raid_level": "raid0", 00:13:11.937 "superblock": false, 00:13:11.937 "num_base_bdevs": 3, 00:13:11.937 "num_base_bdevs_discovered": 3, 00:13:11.937 "num_base_bdevs_operational": 3, 00:13:11.937 "base_bdevs_list": [ 00:13:11.937 { 00:13:11.937 "name": "BaseBdev1", 00:13:11.937 "uuid": "9dbde0f3-fead-4cea-a455-322fffcfc5c0", 00:13:11.937 "is_configured": true, 00:13:11.937 "data_offset": 0, 00:13:11.937 "data_size": 65536 00:13:11.937 }, 00:13:11.937 { 00:13:11.937 "name": "BaseBdev2", 00:13:11.937 "uuid": "08e61611-5436-44a6-9e0c-1ee53b31a2ce", 00:13:11.937 "is_configured": true, 00:13:11.937 "data_offset": 0, 00:13:11.937 "data_size": 65536 00:13:11.937 }, 00:13:11.937 { 00:13:11.937 "name": "BaseBdev3", 00:13:11.937 "uuid": "40890da6-c4a1-4436-997d-1829577968a2", 00:13:11.937 "is_configured": true, 00:13:11.937 "data_offset": 0, 00:13:11.937 "data_size": 65536 00:13:11.937 } 00:13:11.937 ] 00:13:11.937 }' 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.937 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.504 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:12.504 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:12.504 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:12.504 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:13:12.504 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:12.504 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:12.504 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:12.505 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:12.505 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.505 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.505 [2024-11-27 04:34:59.886459] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.505 04:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.505 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:12.505 "name": "Existed_Raid", 00:13:12.505 "aliases": [ 00:13:12.505 "34e165e6-5326-4140-8191-46e3f7f86ced" 00:13:12.505 ], 00:13:12.505 "product_name": "Raid Volume", 00:13:12.505 "block_size": 512, 00:13:12.505 "num_blocks": 196608, 00:13:12.505 "uuid": "34e165e6-5326-4140-8191-46e3f7f86ced", 00:13:12.505 "assigned_rate_limits": { 00:13:12.505 "rw_ios_per_sec": 0, 00:13:12.505 "rw_mbytes_per_sec": 0, 00:13:12.505 "r_mbytes_per_sec": 0, 00:13:12.505 "w_mbytes_per_sec": 0 00:13:12.505 }, 00:13:12.505 "claimed": false, 00:13:12.505 "zoned": false, 00:13:12.505 "supported_io_types": { 00:13:12.505 "read": true, 00:13:12.505 "write": true, 00:13:12.505 "unmap": true, 00:13:12.505 "flush": true, 00:13:12.505 "reset": true, 00:13:12.505 "nvme_admin": false, 00:13:12.505 "nvme_io": false, 00:13:12.505 "nvme_io_md": false, 00:13:12.505 "write_zeroes": true, 00:13:12.505 "zcopy": false, 00:13:12.505 "get_zone_info": false, 00:13:12.505 "zone_management": false, 00:13:12.505 
"zone_append": false, 00:13:12.505 "compare": false, 00:13:12.505 "compare_and_write": false, 00:13:12.505 "abort": false, 00:13:12.505 "seek_hole": false, 00:13:12.505 "seek_data": false, 00:13:12.505 "copy": false, 00:13:12.505 "nvme_iov_md": false 00:13:12.505 }, 00:13:12.505 "memory_domains": [ 00:13:12.505 { 00:13:12.505 "dma_device_id": "system", 00:13:12.505 "dma_device_type": 1 00:13:12.505 }, 00:13:12.505 { 00:13:12.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.505 "dma_device_type": 2 00:13:12.505 }, 00:13:12.505 { 00:13:12.505 "dma_device_id": "system", 00:13:12.505 "dma_device_type": 1 00:13:12.505 }, 00:13:12.505 { 00:13:12.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.505 "dma_device_type": 2 00:13:12.505 }, 00:13:12.505 { 00:13:12.505 "dma_device_id": "system", 00:13:12.505 "dma_device_type": 1 00:13:12.505 }, 00:13:12.505 { 00:13:12.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.505 "dma_device_type": 2 00:13:12.505 } 00:13:12.505 ], 00:13:12.505 "driver_specific": { 00:13:12.505 "raid": { 00:13:12.505 "uuid": "34e165e6-5326-4140-8191-46e3f7f86ced", 00:13:12.505 "strip_size_kb": 64, 00:13:12.505 "state": "online", 00:13:12.505 "raid_level": "raid0", 00:13:12.505 "superblock": false, 00:13:12.505 "num_base_bdevs": 3, 00:13:12.505 "num_base_bdevs_discovered": 3, 00:13:12.505 "num_base_bdevs_operational": 3, 00:13:12.505 "base_bdevs_list": [ 00:13:12.505 { 00:13:12.505 "name": "BaseBdev1", 00:13:12.505 "uuid": "9dbde0f3-fead-4cea-a455-322fffcfc5c0", 00:13:12.505 "is_configured": true, 00:13:12.505 "data_offset": 0, 00:13:12.505 "data_size": 65536 00:13:12.505 }, 00:13:12.505 { 00:13:12.505 "name": "BaseBdev2", 00:13:12.505 "uuid": "08e61611-5436-44a6-9e0c-1ee53b31a2ce", 00:13:12.505 "is_configured": true, 00:13:12.505 "data_offset": 0, 00:13:12.505 "data_size": 65536 00:13:12.505 }, 00:13:12.505 { 00:13:12.505 "name": "BaseBdev3", 00:13:12.505 "uuid": "40890da6-c4a1-4436-997d-1829577968a2", 00:13:12.505 "is_configured": true, 
00:13:12.505 "data_offset": 0, 00:13:12.505 "data_size": 65536 00:13:12.505 } 00:13:12.505 ] 00:13:12.505 } 00:13:12.505 } 00:13:12.505 }' 00:13:12.505 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:12.505 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:12.505 BaseBdev2 00:13:12.505 BaseBdev3' 00:13:12.505 04:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.505 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.765 [2024-11-27 04:35:00.194193] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:12.765 [2024-11-27 04:35:00.194228] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.765 [2024-11-27 04:35:00.194296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.765 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.765 "name": "Existed_Raid", 00:13:12.765 "uuid": "34e165e6-5326-4140-8191-46e3f7f86ced", 00:13:12.765 "strip_size_kb": 64, 00:13:12.765 "state": "offline", 00:13:12.765 "raid_level": "raid0", 00:13:12.765 "superblock": false, 00:13:12.765 "num_base_bdevs": 3, 00:13:12.765 "num_base_bdevs_discovered": 2, 00:13:12.765 "num_base_bdevs_operational": 2, 00:13:12.765 "base_bdevs_list": [ 00:13:12.765 { 00:13:12.765 "name": null, 00:13:12.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.765 "is_configured": false, 00:13:12.765 "data_offset": 0, 00:13:12.765 "data_size": 65536 00:13:12.765 }, 00:13:12.765 { 00:13:12.765 "name": "BaseBdev2", 00:13:12.765 "uuid": "08e61611-5436-44a6-9e0c-1ee53b31a2ce", 00:13:12.765 "is_configured": true, 00:13:12.765 "data_offset": 0, 00:13:12.765 "data_size": 65536 00:13:12.765 }, 00:13:12.765 { 00:13:12.765 "name": "BaseBdev3", 00:13:12.765 "uuid": "40890da6-c4a1-4436-997d-1829577968a2", 00:13:12.765 "is_configured": true, 00:13:12.765 "data_offset": 0, 00:13:12.765 "data_size": 65536 00:13:12.766 } 00:13:12.766 ] 00:13:12.766 }' 00:13:12.766 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.766 04:35:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.345 [2024-11-27 04:35:00.837921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.345 04:35:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:13.345 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.603 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:13.603 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:13.603 04:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:13.603 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.603 04:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.603 [2024-11-27 04:35:00.969305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:13.603 [2024-11-27 04:35:00.969371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.603 BaseBdev2 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.603 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.603 [ 00:13:13.603 { 00:13:13.603 "name": "BaseBdev2", 00:13:13.603 "aliases": [ 00:13:13.603 "6b45899a-a529-44c5-9d4e-98948172966a" 00:13:13.603 ], 00:13:13.604 "product_name": "Malloc disk", 00:13:13.604 "block_size": 512, 00:13:13.604 "num_blocks": 65536, 00:13:13.604 "uuid": "6b45899a-a529-44c5-9d4e-98948172966a", 00:13:13.604 "assigned_rate_limits": { 00:13:13.604 "rw_ios_per_sec": 0, 00:13:13.604 "rw_mbytes_per_sec": 0, 00:13:13.604 "r_mbytes_per_sec": 0, 00:13:13.604 "w_mbytes_per_sec": 0 00:13:13.604 }, 00:13:13.604 "claimed": false, 00:13:13.604 "zoned": false, 00:13:13.604 "supported_io_types": { 00:13:13.604 "read": true, 00:13:13.604 "write": true, 00:13:13.604 "unmap": true, 00:13:13.604 "flush": true, 00:13:13.604 "reset": true, 00:13:13.604 "nvme_admin": false, 00:13:13.604 "nvme_io": false, 00:13:13.604 "nvme_io_md": false, 00:13:13.604 "write_zeroes": true, 00:13:13.604 "zcopy": true, 00:13:13.604 "get_zone_info": false, 00:13:13.604 "zone_management": false, 00:13:13.604 "zone_append": false, 00:13:13.604 "compare": false, 00:13:13.604 "compare_and_write": false, 00:13:13.604 "abort": true, 00:13:13.604 "seek_hole": false, 00:13:13.604 "seek_data": false, 00:13:13.604 "copy": true, 00:13:13.604 "nvme_iov_md": false 00:13:13.604 }, 00:13:13.604 "memory_domains": [ 00:13:13.604 { 00:13:13.604 "dma_device_id": "system", 00:13:13.604 "dma_device_type": 1 00:13:13.604 }, 
00:13:13.604 { 00:13:13.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.604 "dma_device_type": 2 00:13:13.604 } 00:13:13.604 ], 00:13:13.604 "driver_specific": {} 00:13:13.604 } 00:13:13.604 ] 00:13:13.604 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.604 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:13.604 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:13.604 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:13.604 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:13.604 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.604 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.862 BaseBdev3 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.862 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.862 [ 00:13:13.862 { 00:13:13.863 "name": "BaseBdev3", 00:13:13.863 "aliases": [ 00:13:13.863 "18593fd8-5c56-4e6e-82cf-0754e2a123ca" 00:13:13.863 ], 00:13:13.863 "product_name": "Malloc disk", 00:13:13.863 "block_size": 512, 00:13:13.863 "num_blocks": 65536, 00:13:13.863 "uuid": "18593fd8-5c56-4e6e-82cf-0754e2a123ca", 00:13:13.863 "assigned_rate_limits": { 00:13:13.863 "rw_ios_per_sec": 0, 00:13:13.863 "rw_mbytes_per_sec": 0, 00:13:13.863 "r_mbytes_per_sec": 0, 00:13:13.863 "w_mbytes_per_sec": 0 00:13:13.863 }, 00:13:13.863 "claimed": false, 00:13:13.863 "zoned": false, 00:13:13.863 "supported_io_types": { 00:13:13.863 "read": true, 00:13:13.863 "write": true, 00:13:13.863 "unmap": true, 00:13:13.863 "flush": true, 00:13:13.863 "reset": true, 00:13:13.863 "nvme_admin": false, 00:13:13.863 "nvme_io": false, 00:13:13.863 "nvme_io_md": false, 00:13:13.863 "write_zeroes": true, 00:13:13.863 "zcopy": true, 00:13:13.863 "get_zone_info": false, 00:13:13.863 "zone_management": false, 00:13:13.863 "zone_append": false, 00:13:13.863 "compare": false, 00:13:13.863 "compare_and_write": false, 00:13:13.863 "abort": true, 00:13:13.863 "seek_hole": false, 00:13:13.863 "seek_data": false, 00:13:13.863 "copy": true, 00:13:13.863 "nvme_iov_md": false 00:13:13.863 }, 00:13:13.863 "memory_domains": [ 00:13:13.863 { 00:13:13.863 "dma_device_id": "system", 00:13:13.863 "dma_device_type": 1 00:13:13.863 }, 00:13:13.863 { 
00:13:13.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.863 "dma_device_type": 2 00:13:13.863 } 00:13:13.863 ], 00:13:13.863 "driver_specific": {} 00:13:13.863 } 00:13:13.863 ] 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.863 [2024-11-27 04:35:01.269705] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.863 [2024-11-27 04:35:01.269908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.863 [2024-11-27 04:35:01.270050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.863 [2024-11-27 04:35:01.272485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.863 "name": "Existed_Raid", 00:13:13.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.863 "strip_size_kb": 64, 00:13:13.863 "state": "configuring", 00:13:13.863 "raid_level": "raid0", 00:13:13.863 "superblock": false, 00:13:13.863 "num_base_bdevs": 3, 00:13:13.863 "num_base_bdevs_discovered": 2, 00:13:13.863 "num_base_bdevs_operational": 3, 00:13:13.863 "base_bdevs_list": [ 00:13:13.863 { 00:13:13.863 "name": "BaseBdev1", 00:13:13.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.863 
"is_configured": false, 00:13:13.863 "data_offset": 0, 00:13:13.863 "data_size": 0 00:13:13.863 }, 00:13:13.863 { 00:13:13.863 "name": "BaseBdev2", 00:13:13.863 "uuid": "6b45899a-a529-44c5-9d4e-98948172966a", 00:13:13.863 "is_configured": true, 00:13:13.863 "data_offset": 0, 00:13:13.863 "data_size": 65536 00:13:13.863 }, 00:13:13.863 { 00:13:13.863 "name": "BaseBdev3", 00:13:13.863 "uuid": "18593fd8-5c56-4e6e-82cf-0754e2a123ca", 00:13:13.863 "is_configured": true, 00:13:13.863 "data_offset": 0, 00:13:13.863 "data_size": 65536 00:13:13.863 } 00:13:13.863 ] 00:13:13.863 }' 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.863 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.430 [2024-11-27 04:35:01.789920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.430 04:35:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.430 "name": "Existed_Raid", 00:13:14.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.430 "strip_size_kb": 64, 00:13:14.430 "state": "configuring", 00:13:14.430 "raid_level": "raid0", 00:13:14.430 "superblock": false, 00:13:14.430 "num_base_bdevs": 3, 00:13:14.430 "num_base_bdevs_discovered": 1, 00:13:14.430 "num_base_bdevs_operational": 3, 00:13:14.430 "base_bdevs_list": [ 00:13:14.430 { 00:13:14.430 "name": "BaseBdev1", 00:13:14.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.430 "is_configured": false, 00:13:14.430 "data_offset": 0, 00:13:14.430 "data_size": 0 00:13:14.430 }, 00:13:14.430 { 00:13:14.430 "name": null, 00:13:14.430 "uuid": "6b45899a-a529-44c5-9d4e-98948172966a", 00:13:14.430 "is_configured": false, 00:13:14.430 "data_offset": 0, 
00:13:14.430 "data_size": 65536 00:13:14.430 }, 00:13:14.430 { 00:13:14.430 "name": "BaseBdev3", 00:13:14.430 "uuid": "18593fd8-5c56-4e6e-82cf-0754e2a123ca", 00:13:14.430 "is_configured": true, 00:13:14.430 "data_offset": 0, 00:13:14.430 "data_size": 65536 00:13:14.430 } 00:13:14.430 ] 00:13:14.430 }' 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.430 04:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:14.690 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.690 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.690 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.690 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.949 [2024-11-27 04:35:02.364461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.949 BaseBdev1 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.949 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.949 [ 00:13:14.949 { 00:13:14.949 "name": "BaseBdev1", 00:13:14.949 "aliases": [ 00:13:14.949 "3dc7ea68-a4a7-4a13-8345-a046fa8193c0" 00:13:14.949 ], 00:13:14.949 "product_name": "Malloc disk", 00:13:14.949 "block_size": 512, 00:13:14.949 "num_blocks": 65536, 00:13:14.949 "uuid": "3dc7ea68-a4a7-4a13-8345-a046fa8193c0", 00:13:14.949 "assigned_rate_limits": { 00:13:14.949 "rw_ios_per_sec": 0, 00:13:14.949 "rw_mbytes_per_sec": 0, 00:13:14.949 "r_mbytes_per_sec": 0, 00:13:14.949 "w_mbytes_per_sec": 0 00:13:14.949 }, 00:13:14.949 "claimed": true, 00:13:14.949 "claim_type": "exclusive_write", 00:13:14.949 "zoned": false, 00:13:14.949 "supported_io_types": { 00:13:14.949 "read": true, 00:13:14.949 "write": true, 00:13:14.949 "unmap": 
true, 00:13:14.949 "flush": true, 00:13:14.949 "reset": true, 00:13:14.949 "nvme_admin": false, 00:13:14.949 "nvme_io": false, 00:13:14.949 "nvme_io_md": false, 00:13:14.949 "write_zeroes": true, 00:13:14.949 "zcopy": true, 00:13:14.949 "get_zone_info": false, 00:13:14.949 "zone_management": false, 00:13:14.950 "zone_append": false, 00:13:14.950 "compare": false, 00:13:14.950 "compare_and_write": false, 00:13:14.950 "abort": true, 00:13:14.950 "seek_hole": false, 00:13:14.950 "seek_data": false, 00:13:14.950 "copy": true, 00:13:14.950 "nvme_iov_md": false 00:13:14.950 }, 00:13:14.950 "memory_domains": [ 00:13:14.950 { 00:13:14.950 "dma_device_id": "system", 00:13:14.950 "dma_device_type": 1 00:13:14.950 }, 00:13:14.950 { 00:13:14.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.950 "dma_device_type": 2 00:13:14.950 } 00:13:14.950 ], 00:13:14.950 "driver_specific": {} 00:13:14.950 } 00:13:14.950 ] 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.950 04:35:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.950 "name": "Existed_Raid", 00:13:14.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.950 "strip_size_kb": 64, 00:13:14.950 "state": "configuring", 00:13:14.950 "raid_level": "raid0", 00:13:14.950 "superblock": false, 00:13:14.950 "num_base_bdevs": 3, 00:13:14.950 "num_base_bdevs_discovered": 2, 00:13:14.950 "num_base_bdevs_operational": 3, 00:13:14.950 "base_bdevs_list": [ 00:13:14.950 { 00:13:14.950 "name": "BaseBdev1", 00:13:14.950 "uuid": "3dc7ea68-a4a7-4a13-8345-a046fa8193c0", 00:13:14.950 "is_configured": true, 00:13:14.950 "data_offset": 0, 00:13:14.950 "data_size": 65536 00:13:14.950 }, 00:13:14.950 { 00:13:14.950 "name": null, 00:13:14.950 "uuid": "6b45899a-a529-44c5-9d4e-98948172966a", 00:13:14.950 "is_configured": false, 00:13:14.950 "data_offset": 0, 00:13:14.950 "data_size": 65536 00:13:14.950 }, 00:13:14.950 { 00:13:14.950 "name": "BaseBdev3", 00:13:14.950 "uuid": "18593fd8-5c56-4e6e-82cf-0754e2a123ca", 00:13:14.950 "is_configured": true, 00:13:14.950 "data_offset": 0, 
00:13:14.950 "data_size": 65536 00:13:14.950 } 00:13:14.950 ] 00:13:14.950 }' 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.950 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.517 [2024-11-27 04:35:02.961199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.517 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.518 04:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.518 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.518 "name": "Existed_Raid", 00:13:15.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.518 "strip_size_kb": 64, 00:13:15.518 "state": "configuring", 00:13:15.518 "raid_level": "raid0", 00:13:15.518 "superblock": false, 00:13:15.518 "num_base_bdevs": 3, 00:13:15.518 "num_base_bdevs_discovered": 1, 00:13:15.518 "num_base_bdevs_operational": 3, 00:13:15.518 "base_bdevs_list": [ 00:13:15.518 { 00:13:15.518 "name": "BaseBdev1", 00:13:15.518 "uuid": "3dc7ea68-a4a7-4a13-8345-a046fa8193c0", 00:13:15.518 "is_configured": true, 00:13:15.518 "data_offset": 0, 00:13:15.518 "data_size": 65536 00:13:15.518 }, 00:13:15.518 { 
00:13:15.518 "name": null, 00:13:15.518 "uuid": "6b45899a-a529-44c5-9d4e-98948172966a", 00:13:15.518 "is_configured": false, 00:13:15.518 "data_offset": 0, 00:13:15.518 "data_size": 65536 00:13:15.518 }, 00:13:15.518 { 00:13:15.518 "name": null, 00:13:15.518 "uuid": "18593fd8-5c56-4e6e-82cf-0754e2a123ca", 00:13:15.518 "is_configured": false, 00:13:15.518 "data_offset": 0, 00:13:15.518 "data_size": 65536 00:13:15.518 } 00:13:15.518 ] 00:13:15.518 }' 00:13:15.518 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.518 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.179 [2024-11-27 04:35:03.521409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.179 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.179 "name": "Existed_Raid", 00:13:16.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.179 "strip_size_kb": 64, 00:13:16.179 "state": "configuring", 00:13:16.179 "raid_level": "raid0", 00:13:16.179 
"superblock": false, 00:13:16.179 "num_base_bdevs": 3, 00:13:16.179 "num_base_bdevs_discovered": 2, 00:13:16.179 "num_base_bdevs_operational": 3, 00:13:16.179 "base_bdevs_list": [ 00:13:16.179 { 00:13:16.179 "name": "BaseBdev1", 00:13:16.179 "uuid": "3dc7ea68-a4a7-4a13-8345-a046fa8193c0", 00:13:16.179 "is_configured": true, 00:13:16.180 "data_offset": 0, 00:13:16.180 "data_size": 65536 00:13:16.180 }, 00:13:16.180 { 00:13:16.180 "name": null, 00:13:16.180 "uuid": "6b45899a-a529-44c5-9d4e-98948172966a", 00:13:16.180 "is_configured": false, 00:13:16.180 "data_offset": 0, 00:13:16.180 "data_size": 65536 00:13:16.180 }, 00:13:16.180 { 00:13:16.180 "name": "BaseBdev3", 00:13:16.180 "uuid": "18593fd8-5c56-4e6e-82cf-0754e2a123ca", 00:13:16.180 "is_configured": true, 00:13:16.180 "data_offset": 0, 00:13:16.180 "data_size": 65536 00:13:16.180 } 00:13:16.180 ] 00:13:16.180 }' 00:13:16.180 04:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.180 04:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.747 [2024-11-27 04:35:04.129561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.747 "name": "Existed_Raid", 00:13:16.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.747 "strip_size_kb": 64, 00:13:16.747 "state": "configuring", 00:13:16.747 "raid_level": "raid0", 00:13:16.747 "superblock": false, 00:13:16.747 "num_base_bdevs": 3, 00:13:16.747 "num_base_bdevs_discovered": 1, 00:13:16.747 "num_base_bdevs_operational": 3, 00:13:16.747 "base_bdevs_list": [ 00:13:16.747 { 00:13:16.747 "name": null, 00:13:16.747 "uuid": "3dc7ea68-a4a7-4a13-8345-a046fa8193c0", 00:13:16.747 "is_configured": false, 00:13:16.747 "data_offset": 0, 00:13:16.747 "data_size": 65536 00:13:16.747 }, 00:13:16.747 { 00:13:16.747 "name": null, 00:13:16.747 "uuid": "6b45899a-a529-44c5-9d4e-98948172966a", 00:13:16.747 "is_configured": false, 00:13:16.747 "data_offset": 0, 00:13:16.747 "data_size": 65536 00:13:16.747 }, 00:13:16.747 { 00:13:16.747 "name": "BaseBdev3", 00:13:16.747 "uuid": "18593fd8-5c56-4e6e-82cf-0754e2a123ca", 00:13:16.747 "is_configured": true, 00:13:16.747 "data_offset": 0, 00:13:16.747 "data_size": 65536 00:13:16.747 } 00:13:16.747 ] 00:13:16.747 }' 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.747 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.314 [2024-11-27 04:35:04.814667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.314 "name": "Existed_Raid", 00:13:17.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.314 "strip_size_kb": 64, 00:13:17.314 "state": "configuring", 00:13:17.314 "raid_level": "raid0", 00:13:17.314 "superblock": false, 00:13:17.314 "num_base_bdevs": 3, 00:13:17.314 "num_base_bdevs_discovered": 2, 00:13:17.314 "num_base_bdevs_operational": 3, 00:13:17.314 "base_bdevs_list": [ 00:13:17.314 { 00:13:17.314 "name": null, 00:13:17.314 "uuid": "3dc7ea68-a4a7-4a13-8345-a046fa8193c0", 00:13:17.314 "is_configured": false, 00:13:17.314 "data_offset": 0, 00:13:17.314 "data_size": 65536 00:13:17.314 }, 00:13:17.314 { 00:13:17.314 "name": "BaseBdev2", 00:13:17.314 "uuid": "6b45899a-a529-44c5-9d4e-98948172966a", 00:13:17.314 "is_configured": true, 00:13:17.314 "data_offset": 0, 00:13:17.314 "data_size": 65536 00:13:17.314 }, 00:13:17.314 { 00:13:17.314 "name": "BaseBdev3", 00:13:17.314 "uuid": "18593fd8-5c56-4e6e-82cf-0754e2a123ca", 00:13:17.314 "is_configured": true, 00:13:17.314 "data_offset": 0, 00:13:17.314 "data_size": 65536 00:13:17.314 } 00:13:17.314 ] 00:13:17.314 }' 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.314 04:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:17.881 
04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3dc7ea68-a4a7-4a13-8345-a046fa8193c0 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.881 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.141 [2024-11-27 04:35:05.504525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:18.141 [2024-11-27 04:35:05.504571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:18.141 [2024-11-27 04:35:05.504588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:18.141 [2024-11-27 04:35:05.504924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:13:18.141 [2024-11-27 04:35:05.505120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:18.141 [2024-11-27 04:35:05.505149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:18.141 [2024-11-27 04:35:05.505436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.141 NewBaseBdev 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:18.141 [ 00:13:18.141 { 00:13:18.141 "name": "NewBaseBdev", 00:13:18.141 "aliases": [ 00:13:18.141 "3dc7ea68-a4a7-4a13-8345-a046fa8193c0" 00:13:18.141 ], 00:13:18.141 "product_name": "Malloc disk", 00:13:18.141 "block_size": 512, 00:13:18.141 "num_blocks": 65536, 00:13:18.141 "uuid": "3dc7ea68-a4a7-4a13-8345-a046fa8193c0", 00:13:18.141 "assigned_rate_limits": { 00:13:18.141 "rw_ios_per_sec": 0, 00:13:18.141 "rw_mbytes_per_sec": 0, 00:13:18.141 "r_mbytes_per_sec": 0, 00:13:18.141 "w_mbytes_per_sec": 0 00:13:18.141 }, 00:13:18.141 "claimed": true, 00:13:18.141 "claim_type": "exclusive_write", 00:13:18.141 "zoned": false, 00:13:18.141 "supported_io_types": { 00:13:18.141 "read": true, 00:13:18.141 "write": true, 00:13:18.141 "unmap": true, 00:13:18.141 "flush": true, 00:13:18.141 "reset": true, 00:13:18.141 "nvme_admin": false, 00:13:18.141 "nvme_io": false, 00:13:18.141 "nvme_io_md": false, 00:13:18.141 "write_zeroes": true, 00:13:18.141 "zcopy": true, 00:13:18.141 "get_zone_info": false, 00:13:18.141 "zone_management": false, 00:13:18.141 "zone_append": false, 00:13:18.141 "compare": false, 00:13:18.141 "compare_and_write": false, 00:13:18.141 "abort": true, 00:13:18.141 "seek_hole": false, 00:13:18.141 "seek_data": false, 00:13:18.141 "copy": true, 00:13:18.141 "nvme_iov_md": false 00:13:18.141 }, 00:13:18.141 "memory_domains": [ 00:13:18.141 { 00:13:18.141 "dma_device_id": "system", 00:13:18.141 "dma_device_type": 1 00:13:18.141 }, 00:13:18.141 { 00:13:18.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.141 "dma_device_type": 2 00:13:18.141 } 00:13:18.141 ], 00:13:18.141 "driver_specific": {} 00:13:18.141 } 00:13:18.141 ] 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:13:18.141 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.142 "name": "Existed_Raid", 00:13:18.142 "uuid": "da685971-80c5-42e6-9e96-72a480276c36", 00:13:18.142 "strip_size_kb": 64, 00:13:18.142 "state": "online", 00:13:18.142 "raid_level": "raid0", 00:13:18.142 "superblock": false, 00:13:18.142 "num_base_bdevs": 3, 00:13:18.142 
"num_base_bdevs_discovered": 3, 00:13:18.142 "num_base_bdevs_operational": 3, 00:13:18.142 "base_bdevs_list": [ 00:13:18.142 { 00:13:18.142 "name": "NewBaseBdev", 00:13:18.142 "uuid": "3dc7ea68-a4a7-4a13-8345-a046fa8193c0", 00:13:18.142 "is_configured": true, 00:13:18.142 "data_offset": 0, 00:13:18.142 "data_size": 65536 00:13:18.142 }, 00:13:18.142 { 00:13:18.142 "name": "BaseBdev2", 00:13:18.142 "uuid": "6b45899a-a529-44c5-9d4e-98948172966a", 00:13:18.142 "is_configured": true, 00:13:18.142 "data_offset": 0, 00:13:18.142 "data_size": 65536 00:13:18.142 }, 00:13:18.142 { 00:13:18.142 "name": "BaseBdev3", 00:13:18.142 "uuid": "18593fd8-5c56-4e6e-82cf-0754e2a123ca", 00:13:18.142 "is_configured": true, 00:13:18.142 "data_offset": 0, 00:13:18.142 "data_size": 65536 00:13:18.142 } 00:13:18.142 ] 00:13:18.142 }' 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.142 04:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.710 [2024-11-27 04:35:06.049110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:18.710 "name": "Existed_Raid", 00:13:18.710 "aliases": [ 00:13:18.710 "da685971-80c5-42e6-9e96-72a480276c36" 00:13:18.710 ], 00:13:18.710 "product_name": "Raid Volume", 00:13:18.710 "block_size": 512, 00:13:18.710 "num_blocks": 196608, 00:13:18.710 "uuid": "da685971-80c5-42e6-9e96-72a480276c36", 00:13:18.710 "assigned_rate_limits": { 00:13:18.710 "rw_ios_per_sec": 0, 00:13:18.710 "rw_mbytes_per_sec": 0, 00:13:18.710 "r_mbytes_per_sec": 0, 00:13:18.710 "w_mbytes_per_sec": 0 00:13:18.710 }, 00:13:18.710 "claimed": false, 00:13:18.710 "zoned": false, 00:13:18.710 "supported_io_types": { 00:13:18.710 "read": true, 00:13:18.710 "write": true, 00:13:18.710 "unmap": true, 00:13:18.710 "flush": true, 00:13:18.710 "reset": true, 00:13:18.710 "nvme_admin": false, 00:13:18.710 "nvme_io": false, 00:13:18.710 "nvme_io_md": false, 00:13:18.710 "write_zeroes": true, 00:13:18.710 "zcopy": false, 00:13:18.710 "get_zone_info": false, 00:13:18.710 "zone_management": false, 00:13:18.710 "zone_append": false, 00:13:18.710 "compare": false, 00:13:18.710 "compare_and_write": false, 00:13:18.710 "abort": false, 00:13:18.710 "seek_hole": false, 00:13:18.710 "seek_data": false, 00:13:18.710 "copy": false, 00:13:18.710 "nvme_iov_md": false 00:13:18.710 }, 00:13:18.710 "memory_domains": [ 00:13:18.710 { 00:13:18.710 "dma_device_id": "system", 00:13:18.710 "dma_device_type": 1 00:13:18.710 }, 00:13:18.710 { 00:13:18.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.710 "dma_device_type": 2 00:13:18.710 }, 
00:13:18.710 { 00:13:18.710 "dma_device_id": "system", 00:13:18.710 "dma_device_type": 1 00:13:18.710 }, 00:13:18.710 { 00:13:18.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.710 "dma_device_type": 2 00:13:18.710 }, 00:13:18.710 { 00:13:18.710 "dma_device_id": "system", 00:13:18.710 "dma_device_type": 1 00:13:18.710 }, 00:13:18.710 { 00:13:18.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.710 "dma_device_type": 2 00:13:18.710 } 00:13:18.710 ], 00:13:18.710 "driver_specific": { 00:13:18.710 "raid": { 00:13:18.710 "uuid": "da685971-80c5-42e6-9e96-72a480276c36", 00:13:18.710 "strip_size_kb": 64, 00:13:18.710 "state": "online", 00:13:18.710 "raid_level": "raid0", 00:13:18.710 "superblock": false, 00:13:18.710 "num_base_bdevs": 3, 00:13:18.710 "num_base_bdevs_discovered": 3, 00:13:18.710 "num_base_bdevs_operational": 3, 00:13:18.710 "base_bdevs_list": [ 00:13:18.710 { 00:13:18.710 "name": "NewBaseBdev", 00:13:18.710 "uuid": "3dc7ea68-a4a7-4a13-8345-a046fa8193c0", 00:13:18.710 "is_configured": true, 00:13:18.710 "data_offset": 0, 00:13:18.710 "data_size": 65536 00:13:18.710 }, 00:13:18.710 { 00:13:18.710 "name": "BaseBdev2", 00:13:18.710 "uuid": "6b45899a-a529-44c5-9d4e-98948172966a", 00:13:18.710 "is_configured": true, 00:13:18.710 "data_offset": 0, 00:13:18.710 "data_size": 65536 00:13:18.710 }, 00:13:18.710 { 00:13:18.710 "name": "BaseBdev3", 00:13:18.710 "uuid": "18593fd8-5c56-4e6e-82cf-0754e2a123ca", 00:13:18.710 "is_configured": true, 00:13:18.710 "data_offset": 0, 00:13:18.710 "data_size": 65536 00:13:18.710 } 00:13:18.710 ] 00:13:18.710 } 00:13:18.710 } 00:13:18.710 }' 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.710 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:18.710 BaseBdev2 00:13:18.711 BaseBdev3' 00:13:18.711 04:35:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.711 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.969 [2024-11-27 04:35:06.360796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:18.969 [2024-11-27 04:35:06.360829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.969 [2024-11-27 04:35:06.360928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.969 [2024-11-27 04:35:06.361004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.969 [2024-11-27 04:35:06.361024] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63938 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63938 ']' 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63938 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63938 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63938' 00:13:18.969 killing process with pid 63938 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63938 00:13:18.969 [2024-11-27 04:35:06.403962] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.969 04:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63938 00:13:19.228 [2024-11-27 04:35:06.670892] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:20.224 00:13:20.224 real 0m11.709s 00:13:20.224 user 0m19.462s 00:13:20.224 sys 0m1.562s 00:13:20.224 ************************************ 00:13:20.224 END TEST 
raid_state_function_test 00:13:20.224 ************************************ 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.224 04:35:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:13:20.224 04:35:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:20.224 04:35:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.224 04:35:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.224 ************************************ 00:13:20.224 START TEST raid_state_function_test_sb 00:13:20.224 ************************************ 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.224 04:35:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:20.224 Process raid pid: 64576 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=64576 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64576' 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64576 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64576 ']' 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.224 04:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.484 [2024-11-27 04:35:07.901583] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:13:20.484 [2024-11-27 04:35:07.902099] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.484 [2024-11-27 04:35:08.102457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.742 [2024-11-27 04:35:08.262878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.001 [2024-11-27 04:35:08.482854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.001 [2024-11-27 04:35:08.482913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.259 [2024-11-27 04:35:08.865397] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.259 [2024-11-27 04:35:08.865464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.259 [2024-11-27 04:35:08.865482] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.259 [2024-11-27 04:35:08.865498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.259 [2024-11-27 04:35:08.865508] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:13:21.259 [2024-11-27 04:35:08.865523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.259 04:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.517 04:35:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.517 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.517 "name": "Existed_Raid", 00:13:21.517 "uuid": "8d6ee1d0-6f13-4dc5-bc14-88e903637689", 00:13:21.517 "strip_size_kb": 64, 00:13:21.517 "state": "configuring", 00:13:21.517 "raid_level": "raid0", 00:13:21.517 "superblock": true, 00:13:21.517 "num_base_bdevs": 3, 00:13:21.517 "num_base_bdevs_discovered": 0, 00:13:21.517 "num_base_bdevs_operational": 3, 00:13:21.517 "base_bdevs_list": [ 00:13:21.517 { 00:13:21.517 "name": "BaseBdev1", 00:13:21.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.517 "is_configured": false, 00:13:21.517 "data_offset": 0, 00:13:21.517 "data_size": 0 00:13:21.517 }, 00:13:21.517 { 00:13:21.517 "name": "BaseBdev2", 00:13:21.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.517 "is_configured": false, 00:13:21.517 "data_offset": 0, 00:13:21.517 "data_size": 0 00:13:21.517 }, 00:13:21.517 { 00:13:21.517 "name": "BaseBdev3", 00:13:21.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.517 "is_configured": false, 00:13:21.517 "data_offset": 0, 00:13:21.517 "data_size": 0 00:13:21.517 } 00:13:21.517 ] 00:13:21.517 }' 00:13:21.517 04:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.517 04:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.774 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.774 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.774 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.774 [2024-11-27 04:35:09.385447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.774 [2024-11-27 04:35:09.385493] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:21.774 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.774 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:21.774 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.774 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.774 [2024-11-27 04:35:09.393443] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.774 [2024-11-27 04:35:09.393501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.774 [2024-11-27 04:35:09.393518] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.774 [2024-11-27 04:35:09.393544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.774 [2024-11-27 04:35:09.393554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.774 [2024-11-27 04:35:09.393568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.032 [2024-11-27 04:35:09.438742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.032 BaseBdev1 
00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.032 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.032 [ 00:13:22.032 { 00:13:22.032 "name": "BaseBdev1", 00:13:22.032 "aliases": [ 00:13:22.032 "334e325c-86e5-47af-a2c5-1db5c9638b4d" 00:13:22.032 ], 00:13:22.032 "product_name": "Malloc disk", 00:13:22.032 "block_size": 512, 00:13:22.032 "num_blocks": 65536, 00:13:22.032 "uuid": "334e325c-86e5-47af-a2c5-1db5c9638b4d", 00:13:22.032 "assigned_rate_limits": { 00:13:22.032 
"rw_ios_per_sec": 0, 00:13:22.032 "rw_mbytes_per_sec": 0, 00:13:22.032 "r_mbytes_per_sec": 0, 00:13:22.032 "w_mbytes_per_sec": 0 00:13:22.032 }, 00:13:22.032 "claimed": true, 00:13:22.032 "claim_type": "exclusive_write", 00:13:22.032 "zoned": false, 00:13:22.032 "supported_io_types": { 00:13:22.032 "read": true, 00:13:22.032 "write": true, 00:13:22.032 "unmap": true, 00:13:22.032 "flush": true, 00:13:22.032 "reset": true, 00:13:22.032 "nvme_admin": false, 00:13:22.032 "nvme_io": false, 00:13:22.032 "nvme_io_md": false, 00:13:22.032 "write_zeroes": true, 00:13:22.032 "zcopy": true, 00:13:22.032 "get_zone_info": false, 00:13:22.033 "zone_management": false, 00:13:22.033 "zone_append": false, 00:13:22.033 "compare": false, 00:13:22.033 "compare_and_write": false, 00:13:22.033 "abort": true, 00:13:22.033 "seek_hole": false, 00:13:22.033 "seek_data": false, 00:13:22.033 "copy": true, 00:13:22.033 "nvme_iov_md": false 00:13:22.033 }, 00:13:22.033 "memory_domains": [ 00:13:22.033 { 00:13:22.033 "dma_device_id": "system", 00:13:22.033 "dma_device_type": 1 00:13:22.033 }, 00:13:22.033 { 00:13:22.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.033 "dma_device_type": 2 00:13:22.033 } 00:13:22.033 ], 00:13:22.033 "driver_specific": {} 00:13:22.033 } 00:13:22.033 ] 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.033 "name": "Existed_Raid", 00:13:22.033 "uuid": "11b4b4b8-55bf-467c-b7ce-af00dbc43535", 00:13:22.033 "strip_size_kb": 64, 00:13:22.033 "state": "configuring", 00:13:22.033 "raid_level": "raid0", 00:13:22.033 "superblock": true, 00:13:22.033 "num_base_bdevs": 3, 00:13:22.033 "num_base_bdevs_discovered": 1, 00:13:22.033 "num_base_bdevs_operational": 3, 00:13:22.033 "base_bdevs_list": [ 00:13:22.033 { 00:13:22.033 "name": "BaseBdev1", 00:13:22.033 "uuid": "334e325c-86e5-47af-a2c5-1db5c9638b4d", 00:13:22.033 "is_configured": true, 00:13:22.033 "data_offset": 2048, 00:13:22.033 "data_size": 63488 
00:13:22.033 }, 00:13:22.033 { 00:13:22.033 "name": "BaseBdev2", 00:13:22.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.033 "is_configured": false, 00:13:22.033 "data_offset": 0, 00:13:22.033 "data_size": 0 00:13:22.033 }, 00:13:22.033 { 00:13:22.033 "name": "BaseBdev3", 00:13:22.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.033 "is_configured": false, 00:13:22.033 "data_offset": 0, 00:13:22.033 "data_size": 0 00:13:22.033 } 00:13:22.033 ] 00:13:22.033 }' 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.033 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.598 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.599 [2024-11-27 04:35:09.970932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.599 [2024-11-27 04:35:09.970999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.599 [2024-11-27 04:35:09.978994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.599 [2024-11-27 
04:35:09.981560] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.599 [2024-11-27 04:35:09.981618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.599 [2024-11-27 04:35:09.981642] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.599 [2024-11-27 04:35:09.981657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.599 04:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.599 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.599 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.599 "name": "Existed_Raid", 00:13:22.599 "uuid": "8a1c906b-a8ed-48e1-8e3b-1549c73ce46c", 00:13:22.599 "strip_size_kb": 64, 00:13:22.599 "state": "configuring", 00:13:22.599 "raid_level": "raid0", 00:13:22.599 "superblock": true, 00:13:22.599 "num_base_bdevs": 3, 00:13:22.599 "num_base_bdevs_discovered": 1, 00:13:22.599 "num_base_bdevs_operational": 3, 00:13:22.599 "base_bdevs_list": [ 00:13:22.599 { 00:13:22.599 "name": "BaseBdev1", 00:13:22.599 "uuid": "334e325c-86e5-47af-a2c5-1db5c9638b4d", 00:13:22.599 "is_configured": true, 00:13:22.599 "data_offset": 2048, 00:13:22.599 "data_size": 63488 00:13:22.599 }, 00:13:22.599 { 00:13:22.599 "name": "BaseBdev2", 00:13:22.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.599 "is_configured": false, 00:13:22.599 "data_offset": 0, 00:13:22.599 "data_size": 0 00:13:22.599 }, 00:13:22.599 { 00:13:22.599 "name": "BaseBdev3", 00:13:22.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.599 "is_configured": false, 00:13:22.599 "data_offset": 0, 00:13:22.599 "data_size": 0 00:13:22.599 } 00:13:22.599 ] 00:13:22.599 }' 00:13:22.599 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.599 04:35:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.856 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.857 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.857 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.116 [2024-11-27 04:35:10.510136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.116 BaseBdev2 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.116 [ 00:13:23.116 { 00:13:23.116 "name": "BaseBdev2", 00:13:23.116 "aliases": [ 00:13:23.116 "c7afff6f-3724-4439-8807-7f9531647d1d" 00:13:23.116 ], 00:13:23.116 "product_name": "Malloc disk", 00:13:23.116 "block_size": 512, 00:13:23.116 "num_blocks": 65536, 00:13:23.116 "uuid": "c7afff6f-3724-4439-8807-7f9531647d1d", 00:13:23.116 "assigned_rate_limits": { 00:13:23.116 "rw_ios_per_sec": 0, 00:13:23.116 "rw_mbytes_per_sec": 0, 00:13:23.116 "r_mbytes_per_sec": 0, 00:13:23.116 "w_mbytes_per_sec": 0 00:13:23.116 }, 00:13:23.116 "claimed": true, 00:13:23.116 "claim_type": "exclusive_write", 00:13:23.116 "zoned": false, 00:13:23.116 "supported_io_types": { 00:13:23.116 "read": true, 00:13:23.116 "write": true, 00:13:23.116 "unmap": true, 00:13:23.116 "flush": true, 00:13:23.116 "reset": true, 00:13:23.116 "nvme_admin": false, 00:13:23.116 "nvme_io": false, 00:13:23.116 "nvme_io_md": false, 00:13:23.116 "write_zeroes": true, 00:13:23.116 "zcopy": true, 00:13:23.116 "get_zone_info": false, 00:13:23.116 "zone_management": false, 00:13:23.116 "zone_append": false, 00:13:23.116 "compare": false, 00:13:23.116 "compare_and_write": false, 00:13:23.116 "abort": true, 00:13:23.116 "seek_hole": false, 00:13:23.116 "seek_data": false, 00:13:23.116 "copy": true, 00:13:23.116 "nvme_iov_md": false 00:13:23.116 }, 00:13:23.116 "memory_domains": [ 00:13:23.116 { 00:13:23.116 "dma_device_id": "system", 00:13:23.116 "dma_device_type": 1 00:13:23.116 }, 00:13:23.116 { 00:13:23.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.116 "dma_device_type": 2 00:13:23.116 } 00:13:23.116 ], 00:13:23.116 "driver_specific": {} 00:13:23.116 } 00:13:23.116 ] 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.116 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.116 "name": "Existed_Raid", 00:13:23.116 "uuid": "8a1c906b-a8ed-48e1-8e3b-1549c73ce46c", 00:13:23.116 "strip_size_kb": 64, 00:13:23.116 "state": "configuring", 00:13:23.116 "raid_level": "raid0", 00:13:23.116 "superblock": true, 00:13:23.116 "num_base_bdevs": 3, 00:13:23.116 "num_base_bdevs_discovered": 2, 00:13:23.116 "num_base_bdevs_operational": 3, 00:13:23.116 "base_bdevs_list": [ 00:13:23.116 { 00:13:23.116 "name": "BaseBdev1", 00:13:23.116 "uuid": "334e325c-86e5-47af-a2c5-1db5c9638b4d", 00:13:23.116 "is_configured": true, 00:13:23.116 "data_offset": 2048, 00:13:23.116 "data_size": 63488 00:13:23.116 }, 00:13:23.116 { 00:13:23.116 "name": "BaseBdev2", 00:13:23.116 "uuid": "c7afff6f-3724-4439-8807-7f9531647d1d", 00:13:23.116 "is_configured": true, 00:13:23.116 "data_offset": 2048, 00:13:23.116 "data_size": 63488 00:13:23.116 }, 00:13:23.116 { 00:13:23.117 "name": "BaseBdev3", 00:13:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.117 "is_configured": false, 00:13:23.117 "data_offset": 0, 00:13:23.117 "data_size": 0 00:13:23.117 } 00:13:23.117 ] 00:13:23.117 }' 00:13:23.117 04:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.117 04:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.683 [2024-11-27 04:35:11.110060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.683 [2024-11-27 04:35:11.110613] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:23.683 [2024-11-27 04:35:11.110649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:23.683 [2024-11-27 04:35:11.111014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:23.683 [2024-11-27 04:35:11.111223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:23.683 BaseBdev3 00:13:23.683 [2024-11-27 04:35:11.111363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:23.683 [2024-11-27 04:35:11.111630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.683 [ 00:13:23.683 { 00:13:23.683 "name": "BaseBdev3", 00:13:23.683 "aliases": [ 00:13:23.683 "c582173b-dad9-4ce7-8cc0-3705c827821d" 00:13:23.683 ], 00:13:23.683 "product_name": "Malloc disk", 00:13:23.683 "block_size": 512, 00:13:23.683 "num_blocks": 65536, 00:13:23.683 "uuid": "c582173b-dad9-4ce7-8cc0-3705c827821d", 00:13:23.683 "assigned_rate_limits": { 00:13:23.683 "rw_ios_per_sec": 0, 00:13:23.683 "rw_mbytes_per_sec": 0, 00:13:23.683 "r_mbytes_per_sec": 0, 00:13:23.683 "w_mbytes_per_sec": 0 00:13:23.683 }, 00:13:23.683 "claimed": true, 00:13:23.683 "claim_type": "exclusive_write", 00:13:23.683 "zoned": false, 00:13:23.683 "supported_io_types": { 00:13:23.683 "read": true, 00:13:23.683 "write": true, 00:13:23.683 "unmap": true, 00:13:23.683 "flush": true, 00:13:23.683 "reset": true, 00:13:23.683 "nvme_admin": false, 00:13:23.683 "nvme_io": false, 00:13:23.683 "nvme_io_md": false, 00:13:23.683 "write_zeroes": true, 00:13:23.683 "zcopy": true, 00:13:23.683 "get_zone_info": false, 00:13:23.683 "zone_management": false, 00:13:23.683 "zone_append": false, 00:13:23.683 "compare": false, 00:13:23.683 "compare_and_write": false, 00:13:23.683 "abort": true, 00:13:23.683 "seek_hole": false, 00:13:23.683 "seek_data": false, 00:13:23.683 "copy": true, 00:13:23.683 "nvme_iov_md": false 00:13:23.683 }, 00:13:23.683 "memory_domains": [ 00:13:23.683 { 00:13:23.683 "dma_device_id": "system", 00:13:23.683 "dma_device_type": 1 00:13:23.683 }, 00:13:23.683 { 00:13:23.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.683 "dma_device_type": 2 00:13:23.683 } 00:13:23.683 ], 00:13:23.683 "driver_specific": 
{} 00:13:23.683 } 00:13:23.683 ] 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.683 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.684 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.684 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.684 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:23.684 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.684 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.684 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.684 "name": "Existed_Raid", 00:13:23.684 "uuid": "8a1c906b-a8ed-48e1-8e3b-1549c73ce46c", 00:13:23.684 "strip_size_kb": 64, 00:13:23.684 "state": "online", 00:13:23.684 "raid_level": "raid0", 00:13:23.684 "superblock": true, 00:13:23.684 "num_base_bdevs": 3, 00:13:23.684 "num_base_bdevs_discovered": 3, 00:13:23.684 "num_base_bdevs_operational": 3, 00:13:23.684 "base_bdevs_list": [ 00:13:23.684 { 00:13:23.684 "name": "BaseBdev1", 00:13:23.684 "uuid": "334e325c-86e5-47af-a2c5-1db5c9638b4d", 00:13:23.684 "is_configured": true, 00:13:23.684 "data_offset": 2048, 00:13:23.684 "data_size": 63488 00:13:23.684 }, 00:13:23.684 { 00:13:23.684 "name": "BaseBdev2", 00:13:23.684 "uuid": "c7afff6f-3724-4439-8807-7f9531647d1d", 00:13:23.684 "is_configured": true, 00:13:23.684 "data_offset": 2048, 00:13:23.684 "data_size": 63488 00:13:23.684 }, 00:13:23.684 { 00:13:23.684 "name": "BaseBdev3", 00:13:23.684 "uuid": "c582173b-dad9-4ce7-8cc0-3705c827821d", 00:13:23.684 "is_configured": true, 00:13:23.684 "data_offset": 2048, 00:13:23.684 "data_size": 63488 00:13:23.684 } 00:13:23.684 ] 00:13:23.684 }' 00:13:23.684 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.684 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.253 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:24.253 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:24.253 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:13:24.253 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.253 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.253 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.253 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:24.253 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.253 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.254 [2024-11-27 04:35:11.662664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.254 "name": "Existed_Raid", 00:13:24.254 "aliases": [ 00:13:24.254 "8a1c906b-a8ed-48e1-8e3b-1549c73ce46c" 00:13:24.254 ], 00:13:24.254 "product_name": "Raid Volume", 00:13:24.254 "block_size": 512, 00:13:24.254 "num_blocks": 190464, 00:13:24.254 "uuid": "8a1c906b-a8ed-48e1-8e3b-1549c73ce46c", 00:13:24.254 "assigned_rate_limits": { 00:13:24.254 "rw_ios_per_sec": 0, 00:13:24.254 "rw_mbytes_per_sec": 0, 00:13:24.254 "r_mbytes_per_sec": 0, 00:13:24.254 "w_mbytes_per_sec": 0 00:13:24.254 }, 00:13:24.254 "claimed": false, 00:13:24.254 "zoned": false, 00:13:24.254 "supported_io_types": { 00:13:24.254 "read": true, 00:13:24.254 "write": true, 00:13:24.254 "unmap": true, 00:13:24.254 "flush": true, 00:13:24.254 "reset": true, 00:13:24.254 "nvme_admin": false, 00:13:24.254 "nvme_io": false, 00:13:24.254 "nvme_io_md": false, 00:13:24.254 
"write_zeroes": true, 00:13:24.254 "zcopy": false, 00:13:24.254 "get_zone_info": false, 00:13:24.254 "zone_management": false, 00:13:24.254 "zone_append": false, 00:13:24.254 "compare": false, 00:13:24.254 "compare_and_write": false, 00:13:24.254 "abort": false, 00:13:24.254 "seek_hole": false, 00:13:24.254 "seek_data": false, 00:13:24.254 "copy": false, 00:13:24.254 "nvme_iov_md": false 00:13:24.254 }, 00:13:24.254 "memory_domains": [ 00:13:24.254 { 00:13:24.254 "dma_device_id": "system", 00:13:24.254 "dma_device_type": 1 00:13:24.254 }, 00:13:24.254 { 00:13:24.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.254 "dma_device_type": 2 00:13:24.254 }, 00:13:24.254 { 00:13:24.254 "dma_device_id": "system", 00:13:24.254 "dma_device_type": 1 00:13:24.254 }, 00:13:24.254 { 00:13:24.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.254 "dma_device_type": 2 00:13:24.254 }, 00:13:24.254 { 00:13:24.254 "dma_device_id": "system", 00:13:24.254 "dma_device_type": 1 00:13:24.254 }, 00:13:24.254 { 00:13:24.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.254 "dma_device_type": 2 00:13:24.254 } 00:13:24.254 ], 00:13:24.254 "driver_specific": { 00:13:24.254 "raid": { 00:13:24.254 "uuid": "8a1c906b-a8ed-48e1-8e3b-1549c73ce46c", 00:13:24.254 "strip_size_kb": 64, 00:13:24.254 "state": "online", 00:13:24.254 "raid_level": "raid0", 00:13:24.254 "superblock": true, 00:13:24.254 "num_base_bdevs": 3, 00:13:24.254 "num_base_bdevs_discovered": 3, 00:13:24.254 "num_base_bdevs_operational": 3, 00:13:24.254 "base_bdevs_list": [ 00:13:24.254 { 00:13:24.254 "name": "BaseBdev1", 00:13:24.254 "uuid": "334e325c-86e5-47af-a2c5-1db5c9638b4d", 00:13:24.254 "is_configured": true, 00:13:24.254 "data_offset": 2048, 00:13:24.254 "data_size": 63488 00:13:24.254 }, 00:13:24.254 { 00:13:24.254 "name": "BaseBdev2", 00:13:24.254 "uuid": "c7afff6f-3724-4439-8807-7f9531647d1d", 00:13:24.254 "is_configured": true, 00:13:24.254 "data_offset": 2048, 00:13:24.254 "data_size": 63488 00:13:24.254 }, 
00:13:24.254 { 00:13:24.254 "name": "BaseBdev3", 00:13:24.254 "uuid": "c582173b-dad9-4ce7-8cc0-3705c827821d", 00:13:24.254 "is_configured": true, 00:13:24.254 "data_offset": 2048, 00:13:24.254 "data_size": 63488 00:13:24.254 } 00:13:24.254 ] 00:13:24.254 } 00:13:24.254 } 00:13:24.254 }' 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:24.254 BaseBdev2 00:13:24.254 BaseBdev3' 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.254 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.513 
04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.513 04:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.513 [2024-11-27 04:35:11.998423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.513 [2024-11-27 04:35:11.998460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.513 [2024-11-27 04:35:11.998534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.513 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.771 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.771 "name": "Existed_Raid", 00:13:24.771 "uuid": "8a1c906b-a8ed-48e1-8e3b-1549c73ce46c", 00:13:24.771 "strip_size_kb": 64, 00:13:24.771 "state": "offline", 00:13:24.771 "raid_level": "raid0", 00:13:24.771 "superblock": true, 00:13:24.771 "num_base_bdevs": 3, 00:13:24.771 "num_base_bdevs_discovered": 2, 00:13:24.771 "num_base_bdevs_operational": 2, 00:13:24.771 "base_bdevs_list": [ 00:13:24.771 { 00:13:24.771 "name": null, 00:13:24.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.771 "is_configured": false, 00:13:24.771 "data_offset": 0, 00:13:24.771 "data_size": 63488 00:13:24.771 }, 00:13:24.771 { 00:13:24.771 "name": "BaseBdev2", 00:13:24.771 "uuid": "c7afff6f-3724-4439-8807-7f9531647d1d", 00:13:24.771 "is_configured": true, 00:13:24.771 "data_offset": 2048, 00:13:24.771 "data_size": 63488 00:13:24.771 }, 00:13:24.771 { 00:13:24.771 "name": "BaseBdev3", 00:13:24.771 "uuid": "c582173b-dad9-4ce7-8cc0-3705c827821d", 
00:13:24.771 "is_configured": true, 00:13:24.771 "data_offset": 2048, 00:13:24.771 "data_size": 63488 00:13:24.771 } 00:13:24.771 ] 00:13:24.771 }' 00:13:24.771 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.771 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.030 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:25.030 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.030 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.030 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.030 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.030 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.030 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.288 [2024-11-27 04:35:12.664117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.288 [2024-11-27 04:35:12.812988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:25.288 [2024-11-27 04:35:12.813055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.288 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.546 BaseBdev2 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.546 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.547 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:25.547 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.547 04:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.547 [ 00:13:25.547 { 00:13:25.547 "name": "BaseBdev2", 00:13:25.547 "aliases": [ 00:13:25.547 "91a88d78-578f-40c8-9bf3-6de14816fc80" 00:13:25.547 ], 00:13:25.547 "product_name": "Malloc disk", 00:13:25.547 "block_size": 512, 00:13:25.547 "num_blocks": 65536, 00:13:25.547 "uuid": "91a88d78-578f-40c8-9bf3-6de14816fc80", 00:13:25.547 "assigned_rate_limits": { 00:13:25.547 "rw_ios_per_sec": 0, 00:13:25.547 "rw_mbytes_per_sec": 0, 00:13:25.547 "r_mbytes_per_sec": 0, 00:13:25.547 "w_mbytes_per_sec": 0 00:13:25.547 }, 00:13:25.547 "claimed": false, 00:13:25.547 "zoned": false, 00:13:25.547 "supported_io_types": { 00:13:25.547 "read": true, 00:13:25.547 "write": true, 00:13:25.547 "unmap": true, 00:13:25.547 "flush": true, 00:13:25.547 "reset": true, 00:13:25.547 "nvme_admin": false, 00:13:25.547 "nvme_io": false, 00:13:25.547 "nvme_io_md": false, 00:13:25.547 "write_zeroes": true, 00:13:25.547 "zcopy": true, 00:13:25.547 "get_zone_info": false, 00:13:25.547 "zone_management": false, 00:13:25.547 
"zone_append": false, 00:13:25.547 "compare": false, 00:13:25.547 "compare_and_write": false, 00:13:25.547 "abort": true, 00:13:25.547 "seek_hole": false, 00:13:25.547 "seek_data": false, 00:13:25.547 "copy": true, 00:13:25.547 "nvme_iov_md": false 00:13:25.547 }, 00:13:25.547 "memory_domains": [ 00:13:25.547 { 00:13:25.547 "dma_device_id": "system", 00:13:25.547 "dma_device_type": 1 00:13:25.547 }, 00:13:25.547 { 00:13:25.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.547 "dma_device_type": 2 00:13:25.547 } 00:13:25.547 ], 00:13:25.547 "driver_specific": {} 00:13:25.547 } 00:13:25.547 ] 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.547 BaseBdev3 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:25.547 
04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.547 [ 00:13:25.547 { 00:13:25.547 "name": "BaseBdev3", 00:13:25.547 "aliases": [ 00:13:25.547 "562061a6-54c2-428d-a912-d140d67d3093" 00:13:25.547 ], 00:13:25.547 "product_name": "Malloc disk", 00:13:25.547 "block_size": 512, 00:13:25.547 "num_blocks": 65536, 00:13:25.547 "uuid": "562061a6-54c2-428d-a912-d140d67d3093", 00:13:25.547 "assigned_rate_limits": { 00:13:25.547 "rw_ios_per_sec": 0, 00:13:25.547 "rw_mbytes_per_sec": 0, 00:13:25.547 "r_mbytes_per_sec": 0, 00:13:25.547 "w_mbytes_per_sec": 0 00:13:25.547 }, 00:13:25.547 "claimed": false, 00:13:25.547 "zoned": false, 00:13:25.547 "supported_io_types": { 00:13:25.547 "read": true, 00:13:25.547 "write": true, 00:13:25.547 "unmap": true, 00:13:25.547 "flush": true, 00:13:25.547 "reset": true, 00:13:25.547 "nvme_admin": false, 00:13:25.547 "nvme_io": false, 00:13:25.547 "nvme_io_md": false, 00:13:25.547 "write_zeroes": true, 00:13:25.547 "zcopy": true, 00:13:25.547 "get_zone_info": false, 
00:13:25.547 "zone_management": false, 00:13:25.547 "zone_append": false, 00:13:25.547 "compare": false, 00:13:25.547 "compare_and_write": false, 00:13:25.547 "abort": true, 00:13:25.547 "seek_hole": false, 00:13:25.547 "seek_data": false, 00:13:25.547 "copy": true, 00:13:25.547 "nvme_iov_md": false 00:13:25.547 }, 00:13:25.547 "memory_domains": [ 00:13:25.547 { 00:13:25.547 "dma_device_id": "system", 00:13:25.547 "dma_device_type": 1 00:13:25.547 }, 00:13:25.547 { 00:13:25.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.547 "dma_device_type": 2 00:13:25.547 } 00:13:25.547 ], 00:13:25.547 "driver_specific": {} 00:13:25.547 } 00:13:25.547 ] 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.547 [2024-11-27 04:35:13.103416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.547 [2024-11-27 04:35:13.103599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.547 [2024-11-27 04:35:13.103733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.547 [2024-11-27 04:35:13.106251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.547 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.807 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:25.807 "name": "Existed_Raid", 00:13:25.807 "uuid": "bf39684b-e35e-4842-ac47-466a7986e420", 00:13:25.807 "strip_size_kb": 64, 00:13:25.807 "state": "configuring", 00:13:25.807 "raid_level": "raid0", 00:13:25.807 "superblock": true, 00:13:25.807 "num_base_bdevs": 3, 00:13:25.807 "num_base_bdevs_discovered": 2, 00:13:25.807 "num_base_bdevs_operational": 3, 00:13:25.807 "base_bdevs_list": [ 00:13:25.807 { 00:13:25.807 "name": "BaseBdev1", 00:13:25.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.807 "is_configured": false, 00:13:25.807 "data_offset": 0, 00:13:25.807 "data_size": 0 00:13:25.807 }, 00:13:25.807 { 00:13:25.807 "name": "BaseBdev2", 00:13:25.807 "uuid": "91a88d78-578f-40c8-9bf3-6de14816fc80", 00:13:25.807 "is_configured": true, 00:13:25.807 "data_offset": 2048, 00:13:25.807 "data_size": 63488 00:13:25.807 }, 00:13:25.807 { 00:13:25.807 "name": "BaseBdev3", 00:13:25.807 "uuid": "562061a6-54c2-428d-a912-d140d67d3093", 00:13:25.807 "is_configured": true, 00:13:25.807 "data_offset": 2048, 00:13:25.807 "data_size": 63488 00:13:25.807 } 00:13:25.807 ] 00:13:25.807 }' 00:13:25.807 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.807 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.066 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:26.066 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.066 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.066 [2024-11-27 04:35:13.615568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.066 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.066 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:26.066 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.066 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.066 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:26.066 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.067 "name": "Existed_Raid", 00:13:26.067 "uuid": "bf39684b-e35e-4842-ac47-466a7986e420", 00:13:26.067 "strip_size_kb": 64, 00:13:26.067 "state": "configuring", 00:13:26.067 "raid_level": "raid0", 
00:13:26.067 "superblock": true, 00:13:26.067 "num_base_bdevs": 3, 00:13:26.067 "num_base_bdevs_discovered": 1, 00:13:26.067 "num_base_bdevs_operational": 3, 00:13:26.067 "base_bdevs_list": [ 00:13:26.067 { 00:13:26.067 "name": "BaseBdev1", 00:13:26.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.067 "is_configured": false, 00:13:26.067 "data_offset": 0, 00:13:26.067 "data_size": 0 00:13:26.067 }, 00:13:26.067 { 00:13:26.067 "name": null, 00:13:26.067 "uuid": "91a88d78-578f-40c8-9bf3-6de14816fc80", 00:13:26.067 "is_configured": false, 00:13:26.067 "data_offset": 0, 00:13:26.067 "data_size": 63488 00:13:26.067 }, 00:13:26.067 { 00:13:26.067 "name": "BaseBdev3", 00:13:26.067 "uuid": "562061a6-54c2-428d-a912-d140d67d3093", 00:13:26.067 "is_configured": true, 00:13:26.067 "data_offset": 2048, 00:13:26.067 "data_size": 63488 00:13:26.067 } 00:13:26.067 ] 00:13:26.067 }' 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.067 04:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.634 [2024-11-27 04:35:14.169572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.634 BaseBdev1 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.634 [ 00:13:26.634 { 00:13:26.634 "name": "BaseBdev1", 00:13:26.634 
"aliases": [ 00:13:26.634 "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe" 00:13:26.634 ], 00:13:26.634 "product_name": "Malloc disk", 00:13:26.634 "block_size": 512, 00:13:26.634 "num_blocks": 65536, 00:13:26.634 "uuid": "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe", 00:13:26.634 "assigned_rate_limits": { 00:13:26.634 "rw_ios_per_sec": 0, 00:13:26.634 "rw_mbytes_per_sec": 0, 00:13:26.634 "r_mbytes_per_sec": 0, 00:13:26.634 "w_mbytes_per_sec": 0 00:13:26.634 }, 00:13:26.634 "claimed": true, 00:13:26.634 "claim_type": "exclusive_write", 00:13:26.634 "zoned": false, 00:13:26.634 "supported_io_types": { 00:13:26.634 "read": true, 00:13:26.634 "write": true, 00:13:26.634 "unmap": true, 00:13:26.634 "flush": true, 00:13:26.634 "reset": true, 00:13:26.634 "nvme_admin": false, 00:13:26.634 "nvme_io": false, 00:13:26.634 "nvme_io_md": false, 00:13:26.634 "write_zeroes": true, 00:13:26.634 "zcopy": true, 00:13:26.634 "get_zone_info": false, 00:13:26.634 "zone_management": false, 00:13:26.634 "zone_append": false, 00:13:26.634 "compare": false, 00:13:26.634 "compare_and_write": false, 00:13:26.634 "abort": true, 00:13:26.634 "seek_hole": false, 00:13:26.634 "seek_data": false, 00:13:26.634 "copy": true, 00:13:26.634 "nvme_iov_md": false 00:13:26.634 }, 00:13:26.634 "memory_domains": [ 00:13:26.634 { 00:13:26.634 "dma_device_id": "system", 00:13:26.634 "dma_device_type": 1 00:13:26.634 }, 00:13:26.634 { 00:13:26.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.634 "dma_device_type": 2 00:13:26.634 } 00:13:26.634 ], 00:13:26.634 "driver_specific": {} 00:13:26.634 } 00:13:26.634 ] 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:26.634 04:35:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.634 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.893 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.893 "name": "Existed_Raid", 00:13:26.893 "uuid": "bf39684b-e35e-4842-ac47-466a7986e420", 00:13:26.893 "strip_size_kb": 64, 00:13:26.893 "state": "configuring", 00:13:26.893 "raid_level": "raid0", 00:13:26.893 "superblock": true, 00:13:26.893 "num_base_bdevs": 3, 00:13:26.893 
"num_base_bdevs_discovered": 2, 00:13:26.893 "num_base_bdevs_operational": 3, 00:13:26.893 "base_bdevs_list": [ 00:13:26.893 { 00:13:26.893 "name": "BaseBdev1", 00:13:26.893 "uuid": "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe", 00:13:26.893 "is_configured": true, 00:13:26.893 "data_offset": 2048, 00:13:26.893 "data_size": 63488 00:13:26.893 }, 00:13:26.893 { 00:13:26.893 "name": null, 00:13:26.893 "uuid": "91a88d78-578f-40c8-9bf3-6de14816fc80", 00:13:26.893 "is_configured": false, 00:13:26.893 "data_offset": 0, 00:13:26.893 "data_size": 63488 00:13:26.893 }, 00:13:26.893 { 00:13:26.893 "name": "BaseBdev3", 00:13:26.893 "uuid": "562061a6-54c2-428d-a912-d140d67d3093", 00:13:26.893 "is_configured": true, 00:13:26.893 "data_offset": 2048, 00:13:26.893 "data_size": 63488 00:13:26.893 } 00:13:26.893 ] 00:13:26.893 }' 00:13:26.893 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.893 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.152 04:35:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.152 [2024-11-27 04:35:14.749839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.152 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.410 04:35:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.410 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.410 "name": "Existed_Raid", 00:13:27.410 "uuid": "bf39684b-e35e-4842-ac47-466a7986e420", 00:13:27.410 "strip_size_kb": 64, 00:13:27.410 "state": "configuring", 00:13:27.410 "raid_level": "raid0", 00:13:27.410 "superblock": true, 00:13:27.410 "num_base_bdevs": 3, 00:13:27.410 "num_base_bdevs_discovered": 1, 00:13:27.410 "num_base_bdevs_operational": 3, 00:13:27.410 "base_bdevs_list": [ 00:13:27.410 { 00:13:27.410 "name": "BaseBdev1", 00:13:27.410 "uuid": "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe", 00:13:27.410 "is_configured": true, 00:13:27.410 "data_offset": 2048, 00:13:27.410 "data_size": 63488 00:13:27.410 }, 00:13:27.410 { 00:13:27.410 "name": null, 00:13:27.410 "uuid": "91a88d78-578f-40c8-9bf3-6de14816fc80", 00:13:27.410 "is_configured": false, 00:13:27.410 "data_offset": 0, 00:13:27.410 "data_size": 63488 00:13:27.410 }, 00:13:27.410 { 00:13:27.410 "name": null, 00:13:27.410 "uuid": "562061a6-54c2-428d-a912-d140d67d3093", 00:13:27.410 "is_configured": false, 00:13:27.410 "data_offset": 0, 00:13:27.410 "data_size": 63488 00:13:27.410 } 00:13:27.410 ] 00:13:27.410 }' 00:13:27.410 04:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.410 04:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.671 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.671 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.671 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.671 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:27.671 04:35:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.934 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:27.934 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:27.934 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.934 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.934 [2024-11-27 04:35:15.310017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.935 "name": "Existed_Raid", 00:13:27.935 "uuid": "bf39684b-e35e-4842-ac47-466a7986e420", 00:13:27.935 "strip_size_kb": 64, 00:13:27.935 "state": "configuring", 00:13:27.935 "raid_level": "raid0", 00:13:27.935 "superblock": true, 00:13:27.935 "num_base_bdevs": 3, 00:13:27.935 "num_base_bdevs_discovered": 2, 00:13:27.935 "num_base_bdevs_operational": 3, 00:13:27.935 "base_bdevs_list": [ 00:13:27.935 { 00:13:27.935 "name": "BaseBdev1", 00:13:27.935 "uuid": "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe", 00:13:27.935 "is_configured": true, 00:13:27.935 "data_offset": 2048, 00:13:27.935 "data_size": 63488 00:13:27.935 }, 00:13:27.935 { 00:13:27.935 "name": null, 00:13:27.935 "uuid": "91a88d78-578f-40c8-9bf3-6de14816fc80", 00:13:27.935 "is_configured": false, 00:13:27.935 "data_offset": 0, 00:13:27.935 "data_size": 63488 00:13:27.935 }, 00:13:27.935 { 00:13:27.935 "name": "BaseBdev3", 00:13:27.935 "uuid": "562061a6-54c2-428d-a912-d140d67d3093", 00:13:27.935 "is_configured": true, 00:13:27.935 "data_offset": 2048, 00:13:27.935 "data_size": 63488 00:13:27.935 } 00:13:27.935 ] 00:13:27.935 }' 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.935 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.541 [2024-11-27 04:35:15.882208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.541 04:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.541 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.541 "name": "Existed_Raid", 00:13:28.541 "uuid": "bf39684b-e35e-4842-ac47-466a7986e420", 00:13:28.541 "strip_size_kb": 64, 00:13:28.541 "state": "configuring", 00:13:28.541 "raid_level": "raid0", 00:13:28.541 "superblock": true, 00:13:28.541 "num_base_bdevs": 3, 00:13:28.541 "num_base_bdevs_discovered": 1, 00:13:28.541 "num_base_bdevs_operational": 3, 00:13:28.541 "base_bdevs_list": [ 00:13:28.541 { 00:13:28.541 "name": null, 00:13:28.541 "uuid": "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe", 00:13:28.541 "is_configured": false, 00:13:28.541 "data_offset": 0, 00:13:28.541 "data_size": 63488 00:13:28.541 }, 00:13:28.541 { 00:13:28.541 "name": null, 00:13:28.541 "uuid": "91a88d78-578f-40c8-9bf3-6de14816fc80", 00:13:28.541 "is_configured": false, 00:13:28.541 "data_offset": 0, 00:13:28.541 "data_size": 63488 00:13:28.541 
}, 00:13:28.541 { 00:13:28.541 "name": "BaseBdev3", 00:13:28.541 "uuid": "562061a6-54c2-428d-a912-d140d67d3093", 00:13:28.541 "is_configured": true, 00:13:28.541 "data_offset": 2048, 00:13:28.541 "data_size": 63488 00:13:28.541 } 00:13:28.541 ] 00:13:28.541 }' 00:13:28.541 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.541 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.109 [2024-11-27 04:35:16.535057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.109 "name": "Existed_Raid", 00:13:29.109 "uuid": "bf39684b-e35e-4842-ac47-466a7986e420", 00:13:29.109 "strip_size_kb": 64, 00:13:29.109 "state": "configuring", 00:13:29.109 "raid_level": "raid0", 00:13:29.109 "superblock": true, 00:13:29.109 "num_base_bdevs": 3, 00:13:29.109 "num_base_bdevs_discovered": 2, 00:13:29.109 
"num_base_bdevs_operational": 3, 00:13:29.109 "base_bdevs_list": [ 00:13:29.109 { 00:13:29.109 "name": null, 00:13:29.109 "uuid": "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe", 00:13:29.109 "is_configured": false, 00:13:29.109 "data_offset": 0, 00:13:29.109 "data_size": 63488 00:13:29.109 }, 00:13:29.109 { 00:13:29.109 "name": "BaseBdev2", 00:13:29.109 "uuid": "91a88d78-578f-40c8-9bf3-6de14816fc80", 00:13:29.109 "is_configured": true, 00:13:29.109 "data_offset": 2048, 00:13:29.109 "data_size": 63488 00:13:29.109 }, 00:13:29.109 { 00:13:29.109 "name": "BaseBdev3", 00:13:29.109 "uuid": "562061a6-54c2-428d-a912-d140d67d3093", 00:13:29.109 "is_configured": true, 00:13:29.109 "data_offset": 2048, 00:13:29.109 "data_size": 63488 00:13:29.109 } 00:13:29.109 ] 00:13:29.109 }' 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.109 04:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1bc52eaf-31fd-460c-b7d5-d0314d3c84fe 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 [2024-11-27 04:35:17.245384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:29.676 [2024-11-27 04:35:17.245679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:29.676 [2024-11-27 04:35:17.245704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:29.676 [2024-11-27 04:35:17.246090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:29.676 NewBaseBdev 00:13:29.676 [2024-11-27 04:35:17.246281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:29.676 [2024-11-27 04:35:17.246299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:29.676 [2024-11-27 04:35:17.246468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.676 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:29.677 04:35:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.677 [ 00:13:29.677 { 00:13:29.677 "name": "NewBaseBdev", 00:13:29.677 "aliases": [ 00:13:29.677 "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe" 00:13:29.677 ], 00:13:29.677 "product_name": "Malloc disk", 00:13:29.677 "block_size": 512, 00:13:29.677 "num_blocks": 65536, 00:13:29.677 "uuid": "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe", 00:13:29.677 "assigned_rate_limits": { 00:13:29.677 "rw_ios_per_sec": 0, 00:13:29.677 "rw_mbytes_per_sec": 0, 00:13:29.677 "r_mbytes_per_sec": 0, 00:13:29.677 "w_mbytes_per_sec": 0 00:13:29.677 }, 00:13:29.677 "claimed": true, 00:13:29.677 "claim_type": "exclusive_write", 00:13:29.677 "zoned": false, 00:13:29.677 "supported_io_types": { 00:13:29.677 "read": true, 00:13:29.677 "write": true, 00:13:29.677 "unmap": true, 
00:13:29.677 "flush": true, 00:13:29.677 "reset": true, 00:13:29.677 "nvme_admin": false, 00:13:29.677 "nvme_io": false, 00:13:29.677 "nvme_io_md": false, 00:13:29.677 "write_zeroes": true, 00:13:29.677 "zcopy": true, 00:13:29.677 "get_zone_info": false, 00:13:29.677 "zone_management": false, 00:13:29.677 "zone_append": false, 00:13:29.677 "compare": false, 00:13:29.677 "compare_and_write": false, 00:13:29.677 "abort": true, 00:13:29.677 "seek_hole": false, 00:13:29.677 "seek_data": false, 00:13:29.677 "copy": true, 00:13:29.677 "nvme_iov_md": false 00:13:29.677 }, 00:13:29.677 "memory_domains": [ 00:13:29.677 { 00:13:29.677 "dma_device_id": "system", 00:13:29.677 "dma_device_type": 1 00:13:29.677 }, 00:13:29.677 { 00:13:29.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.677 "dma_device_type": 2 00:13:29.677 } 00:13:29.677 ], 00:13:29.677 "driver_specific": {} 00:13:29.677 } 00:13:29.677 ] 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.677 04:35:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.677 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.935 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.935 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.935 "name": "Existed_Raid", 00:13:29.935 "uuid": "bf39684b-e35e-4842-ac47-466a7986e420", 00:13:29.935 "strip_size_kb": 64, 00:13:29.935 "state": "online", 00:13:29.935 "raid_level": "raid0", 00:13:29.935 "superblock": true, 00:13:29.935 "num_base_bdevs": 3, 00:13:29.935 "num_base_bdevs_discovered": 3, 00:13:29.935 "num_base_bdevs_operational": 3, 00:13:29.935 "base_bdevs_list": [ 00:13:29.935 { 00:13:29.935 "name": "NewBaseBdev", 00:13:29.935 "uuid": "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe", 00:13:29.935 "is_configured": true, 00:13:29.935 "data_offset": 2048, 00:13:29.935 "data_size": 63488 00:13:29.935 }, 00:13:29.935 { 00:13:29.935 "name": "BaseBdev2", 00:13:29.935 "uuid": "91a88d78-578f-40c8-9bf3-6de14816fc80", 00:13:29.935 "is_configured": true, 00:13:29.935 "data_offset": 2048, 00:13:29.935 "data_size": 63488 00:13:29.935 }, 00:13:29.935 { 00:13:29.935 "name": "BaseBdev3", 00:13:29.935 "uuid": "562061a6-54c2-428d-a912-d140d67d3093", 00:13:29.935 "is_configured": 
true, 00:13:29.935 "data_offset": 2048, 00:13:29.935 "data_size": 63488 00:13:29.935 } 00:13:29.935 ] 00:13:29.935 }' 00:13:29.935 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.935 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.193 [2024-11-27 04:35:17.790033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.193 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:30.451 "name": "Existed_Raid", 00:13:30.451 "aliases": [ 00:13:30.451 "bf39684b-e35e-4842-ac47-466a7986e420" 00:13:30.451 ], 00:13:30.451 "product_name": "Raid Volume", 
00:13:30.451 "block_size": 512, 00:13:30.451 "num_blocks": 190464, 00:13:30.451 "uuid": "bf39684b-e35e-4842-ac47-466a7986e420", 00:13:30.451 "assigned_rate_limits": { 00:13:30.451 "rw_ios_per_sec": 0, 00:13:30.451 "rw_mbytes_per_sec": 0, 00:13:30.451 "r_mbytes_per_sec": 0, 00:13:30.451 "w_mbytes_per_sec": 0 00:13:30.451 }, 00:13:30.451 "claimed": false, 00:13:30.451 "zoned": false, 00:13:30.451 "supported_io_types": { 00:13:30.451 "read": true, 00:13:30.451 "write": true, 00:13:30.451 "unmap": true, 00:13:30.451 "flush": true, 00:13:30.451 "reset": true, 00:13:30.451 "nvme_admin": false, 00:13:30.451 "nvme_io": false, 00:13:30.451 "nvme_io_md": false, 00:13:30.451 "write_zeroes": true, 00:13:30.451 "zcopy": false, 00:13:30.451 "get_zone_info": false, 00:13:30.451 "zone_management": false, 00:13:30.451 "zone_append": false, 00:13:30.451 "compare": false, 00:13:30.451 "compare_and_write": false, 00:13:30.451 "abort": false, 00:13:30.451 "seek_hole": false, 00:13:30.451 "seek_data": false, 00:13:30.451 "copy": false, 00:13:30.451 "nvme_iov_md": false 00:13:30.451 }, 00:13:30.451 "memory_domains": [ 00:13:30.451 { 00:13:30.451 "dma_device_id": "system", 00:13:30.451 "dma_device_type": 1 00:13:30.451 }, 00:13:30.451 { 00:13:30.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.451 "dma_device_type": 2 00:13:30.451 }, 00:13:30.451 { 00:13:30.451 "dma_device_id": "system", 00:13:30.451 "dma_device_type": 1 00:13:30.451 }, 00:13:30.451 { 00:13:30.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.451 "dma_device_type": 2 00:13:30.451 }, 00:13:30.451 { 00:13:30.451 "dma_device_id": "system", 00:13:30.451 "dma_device_type": 1 00:13:30.451 }, 00:13:30.451 { 00:13:30.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.451 "dma_device_type": 2 00:13:30.451 } 00:13:30.451 ], 00:13:30.451 "driver_specific": { 00:13:30.451 "raid": { 00:13:30.451 "uuid": "bf39684b-e35e-4842-ac47-466a7986e420", 00:13:30.451 "strip_size_kb": 64, 00:13:30.451 "state": "online", 00:13:30.451 
"raid_level": "raid0", 00:13:30.451 "superblock": true, 00:13:30.451 "num_base_bdevs": 3, 00:13:30.451 "num_base_bdevs_discovered": 3, 00:13:30.451 "num_base_bdevs_operational": 3, 00:13:30.451 "base_bdevs_list": [ 00:13:30.451 { 00:13:30.451 "name": "NewBaseBdev", 00:13:30.451 "uuid": "1bc52eaf-31fd-460c-b7d5-d0314d3c84fe", 00:13:30.451 "is_configured": true, 00:13:30.451 "data_offset": 2048, 00:13:30.451 "data_size": 63488 00:13:30.451 }, 00:13:30.451 { 00:13:30.451 "name": "BaseBdev2", 00:13:30.451 "uuid": "91a88d78-578f-40c8-9bf3-6de14816fc80", 00:13:30.451 "is_configured": true, 00:13:30.451 "data_offset": 2048, 00:13:30.451 "data_size": 63488 00:13:30.451 }, 00:13:30.451 { 00:13:30.451 "name": "BaseBdev3", 00:13:30.451 "uuid": "562061a6-54c2-428d-a912-d140d67d3093", 00:13:30.451 "is_configured": true, 00:13:30.451 "data_offset": 2048, 00:13:30.451 "data_size": 63488 00:13:30.451 } 00:13:30.451 ] 00:13:30.451 } 00:13:30.451 } 00:13:30.451 }' 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:30.451 BaseBdev2 00:13:30.451 BaseBdev3' 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
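The two jq filters above (bdev_raid.sh@188 extracting the configured base bdev names, and @189 building the `'512   '` comparison string) can be mimicked in Python for illustration. This is a sketch against a trimmed copy of the JSON dumped in the log, not part of the test suite; the field values are taken from the output above.

```python
import json

# Trimmed copy of the raid bdev info dumped above (values from the log).
raid_bdev_info = json.loads("""
{
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": true},
        {"name": "BaseBdev2",   "is_configured": true},
        {"name": "BaseBdev3",   "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of: jq -r '.driver_specific.raid.base_bdevs_list[]
#                       | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]

# Equivalent of: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq's join() renders absent/null fields as empty strings, which is why the
# script's [[ ... == \5\1\2\ \ \ ]] test compares against "512" plus three
# trailing spaces.
fields = [raid_bdev_info.get(k)
          for k in ("block_size", "md_size", "md_interleave", "dif_type")]
cmp_raid_bdev = " ".join("" if v is None else str(v) for v in fields)

print(base_bdev_names)       # ['NewBaseBdev', 'BaseBdev2', 'BaseBdev3']
print(repr(cmp_raid_bdev))   # '512   '
```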
00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.451 04:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.451 04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.451 04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:30.451 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.451 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.451 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.451 04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.451 04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.451 04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.451 04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:30.451 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.452 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.452 
04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.452 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.710 [2024-11-27 04:35:18.125666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.710 [2024-11-27 04:35:18.125701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.710 [2024-11-27 04:35:18.125852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.710 [2024-11-27 04:35:18.125942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.710 [2024-11-27 04:35:18.125963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64576 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64576 ']' 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64576 00:13:30.710 04:35:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64576 00:13:30.710 killing process with pid 64576 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64576' 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64576 00:13:30.710 [2024-11-27 04:35:18.162811] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.710 04:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64576 00:13:30.969 [2024-11-27 04:35:18.433696] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.902 04:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:31.902 00:13:31.902 real 0m11.720s 00:13:31.902 user 0m19.462s 00:13:31.902 sys 0m1.565s 00:13:31.902 ************************************ 00:13:31.902 END TEST raid_state_function_test_sb 00:13:31.902 ************************************ 00:13:31.902 04:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.902 04:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.160 04:35:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:13:32.160 04:35:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:32.160 04:35:19 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.160 04:35:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.160 ************************************ 00:13:32.160 START TEST raid_superblock_test 00:13:32.160 ************************************ 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:32.160 04:35:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65207 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65207 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65207 ']' 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.160 04:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.160 [2024-11-27 04:35:19.649860] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
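The setup steps at bdev_raid.sh@404-406 above branch on the raid level: for anything other than raid1 a strip size is set and passed to `bdev_raid_create` as `-z 64`. A minimal sketch of that branch (the function name is illustrative, not from the script):

```python
def strip_size_create_arg(raid_level: str, strip_size_kb: int = 64) -> str:
    # Mirrors bdev_raid.sh@404-406: raid1 mirrors whole bdevs and takes no
    # strip size; every other level gets a "-z <kb>" argument.
    if raid_level != "raid1":
        return f"-z {strip_size_kb}"
    return ""

print(strip_size_create_arg("raid0"))  # -z 64
print(strip_size_create_arg("raid1"))  # (empty string)
```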
00:13:32.160 [2024-11-27 04:35:19.650044] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65207 ] 00:13:32.419 [2024-11-27 04:35:19.827423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.419 [2024-11-27 04:35:19.958513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.678 [2024-11-27 04:35:20.166055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.678 [2024-11-27 04:35:20.166138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:33.245 
04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.245 malloc1 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.245 [2024-11-27 04:35:20.654216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:33.245 [2024-11-27 04:35:20.654295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.245 [2024-11-27 04:35:20.654327] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:33.245 [2024-11-27 04:35:20.654344] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.245 [2024-11-27 04:35:20.657112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.245 [2024-11-27 04:35:20.657160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:33.245 pt1 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.245 malloc2 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.245 [2024-11-27 04:35:20.710447] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:33.245 [2024-11-27 04:35:20.710520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.245 [2024-11-27 04:35:20.710576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:33.245 [2024-11-27 04:35:20.710595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.245 [2024-11-27 04:35:20.713439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.245 [2024-11-27 04:35:20.713487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:33.245 
pt2 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:33.245 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.246 malloc3 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.246 [2024-11-27 04:35:20.773465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:33.246 [2024-11-27 04:35:20.773533] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.246 [2024-11-27 04:35:20.773566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:33.246 [2024-11-27 04:35:20.773581] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.246 [2024-11-27 04:35:20.776396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.246 [2024-11-27 04:35:20.776444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:33.246 pt3 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.246 [2024-11-27 04:35:20.785530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:33.246 [2024-11-27 04:35:20.787955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:33.246 [2024-11-27 04:35:20.788059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:33.246 [2024-11-27 04:35:20.788267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:33.246 [2024-11-27 04:35:20.788302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:33.246 [2024-11-27 04:35:20.788615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
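The sizes reported here (`blockcnt 190464, blocklen 512`, and later `data_offset 2048` / `data_size 63488` per base bdev) are consistent with the following arithmetic: each `bdev_malloc_create 32 512` base bdev is 32 MiB of 512-byte blocks, the superblock (`-s`) reserves the first 2048 blocks, and raid0 capacity is the sum of the usable regions. This is an illustrative check, not code from the test:

```python
MIB = 1024 * 1024

# bdev_malloc_create 32 512 -> 32 MiB at 512 B/block = 65536 blocks
malloc_size_blocks = 32 * MIB // 512

data_offset = 2048                 # blocks reserved for the raid superblock
data_size = malloc_size_blocks - data_offset
num_base_bdevs = 3

# raid0 stripes across all base bdevs, so capacity sums their data regions.
raid0_num_blocks = num_base_bdevs * data_size

print(data_size, raid0_num_blocks)  # 63488 190464
```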
00:13:33.246 [2024-11-27 04:35:20.788862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:33.246 [2024-11-27 04:35:20.788888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:33.246 [2024-11-27 04:35:20.789073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.246 "name": "raid_bdev1", 00:13:33.246 "uuid": "5a71e065-cf6f-4c07-89ae-29c83bd72891", 00:13:33.246 "strip_size_kb": 64, 00:13:33.246 "state": "online", 00:13:33.246 "raid_level": "raid0", 00:13:33.246 "superblock": true, 00:13:33.246 "num_base_bdevs": 3, 00:13:33.246 "num_base_bdevs_discovered": 3, 00:13:33.246 "num_base_bdevs_operational": 3, 00:13:33.246 "base_bdevs_list": [ 00:13:33.246 { 00:13:33.246 "name": "pt1", 00:13:33.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.246 "is_configured": true, 00:13:33.246 "data_offset": 2048, 00:13:33.246 "data_size": 63488 00:13:33.246 }, 00:13:33.246 { 00:13:33.246 "name": "pt2", 00:13:33.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.246 "is_configured": true, 00:13:33.246 "data_offset": 2048, 00:13:33.246 "data_size": 63488 00:13:33.246 }, 00:13:33.246 { 00:13:33.246 "name": "pt3", 00:13:33.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.246 "is_configured": true, 00:13:33.246 "data_offset": 2048, 00:13:33.246 "data_size": 63488 00:13:33.246 } 00:13:33.246 ] 00:13:33.246 }' 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.246 04:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:33.813 [2024-11-27 04:35:21.294084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.813 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:33.813 "name": "raid_bdev1", 00:13:33.813 "aliases": [ 00:13:33.813 "5a71e065-cf6f-4c07-89ae-29c83bd72891" 00:13:33.813 ], 00:13:33.813 "product_name": "Raid Volume", 00:13:33.813 "block_size": 512, 00:13:33.813 "num_blocks": 190464, 00:13:33.813 "uuid": "5a71e065-cf6f-4c07-89ae-29c83bd72891", 00:13:33.813 "assigned_rate_limits": { 00:13:33.813 "rw_ios_per_sec": 0, 00:13:33.813 "rw_mbytes_per_sec": 0, 00:13:33.813 "r_mbytes_per_sec": 0, 00:13:33.813 "w_mbytes_per_sec": 0 00:13:33.813 }, 00:13:33.813 "claimed": false, 00:13:33.813 "zoned": false, 00:13:33.813 "supported_io_types": { 00:13:33.813 "read": true, 00:13:33.813 "write": true, 00:13:33.813 "unmap": true, 00:13:33.813 "flush": true, 00:13:33.813 "reset": true, 00:13:33.813 "nvme_admin": false, 00:13:33.813 "nvme_io": false, 00:13:33.813 "nvme_io_md": false, 00:13:33.813 "write_zeroes": true, 00:13:33.813 "zcopy": false, 00:13:33.813 "get_zone_info": false, 00:13:33.813 "zone_management": false, 00:13:33.813 "zone_append": false, 00:13:33.813 "compare": 
false, 00:13:33.813 "compare_and_write": false, 00:13:33.813 "abort": false, 00:13:33.813 "seek_hole": false, 00:13:33.813 "seek_data": false, 00:13:33.813 "copy": false, 00:13:33.814 "nvme_iov_md": false 00:13:33.814 }, 00:13:33.814 "memory_domains": [ 00:13:33.814 { 00:13:33.814 "dma_device_id": "system", 00:13:33.814 "dma_device_type": 1 00:13:33.814 }, 00:13:33.814 { 00:13:33.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.814 "dma_device_type": 2 00:13:33.814 }, 00:13:33.814 { 00:13:33.814 "dma_device_id": "system", 00:13:33.814 "dma_device_type": 1 00:13:33.814 }, 00:13:33.814 { 00:13:33.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.814 "dma_device_type": 2 00:13:33.814 }, 00:13:33.814 { 00:13:33.814 "dma_device_id": "system", 00:13:33.814 "dma_device_type": 1 00:13:33.814 }, 00:13:33.814 { 00:13:33.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.814 "dma_device_type": 2 00:13:33.814 } 00:13:33.814 ], 00:13:33.814 "driver_specific": { 00:13:33.814 "raid": { 00:13:33.814 "uuid": "5a71e065-cf6f-4c07-89ae-29c83bd72891", 00:13:33.814 "strip_size_kb": 64, 00:13:33.814 "state": "online", 00:13:33.814 "raid_level": "raid0", 00:13:33.814 "superblock": true, 00:13:33.814 "num_base_bdevs": 3, 00:13:33.814 "num_base_bdevs_discovered": 3, 00:13:33.814 "num_base_bdevs_operational": 3, 00:13:33.814 "base_bdevs_list": [ 00:13:33.814 { 00:13:33.814 "name": "pt1", 00:13:33.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.814 "is_configured": true, 00:13:33.814 "data_offset": 2048, 00:13:33.814 "data_size": 63488 00:13:33.814 }, 00:13:33.814 { 00:13:33.814 "name": "pt2", 00:13:33.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.814 "is_configured": true, 00:13:33.814 "data_offset": 2048, 00:13:33.814 "data_size": 63488 00:13:33.814 }, 00:13:33.814 { 00:13:33.814 "name": "pt3", 00:13:33.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.814 "is_configured": true, 00:13:33.814 "data_offset": 2048, 00:13:33.814 "data_size": 
63488 00:13:33.814 } 00:13:33.814 ] 00:13:33.814 } 00:13:33.814 } 00:13:33.814 }' 00:13:33.814 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:33.814 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:33.814 pt2 00:13:33.814 pt3' 00:13:33.814 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:34.073 [2024-11-27 04:35:21.626115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5a71e065-cf6f-4c07-89ae-29c83bd72891 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5a71e065-cf6f-4c07-89ae-29c83bd72891 ']' 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.073 [2024-11-27 04:35:21.669691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.073 [2024-11-27 04:35:21.669729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.073 [2024-11-27 04:35:21.669888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.073 [2024-11-27 04:35:21.669983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.073 [2024-11-27 04:35:21.670005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.073 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:34.332 04:35:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:34.332 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.333 [2024-11-27 04:35:21.809802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:34.333 [2024-11-27 04:35:21.812285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:34.333 [2024-11-27 04:35:21.812364] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:34.333 [2024-11-27 04:35:21.812440] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:34.333 [2024-11-27 04:35:21.812511] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:34.333 [2024-11-27 04:35:21.812546] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:34.333 [2024-11-27 04:35:21.812573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.333 [2024-11-27 04:35:21.812590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:34.333 request: 00:13:34.333 { 00:13:34.333 "name": "raid_bdev1", 00:13:34.333 "raid_level": "raid0", 00:13:34.333 "base_bdevs": [ 00:13:34.333 "malloc1", 00:13:34.333 "malloc2", 00:13:34.333 "malloc3" 00:13:34.333 ], 00:13:34.333 "strip_size_kb": 64, 00:13:34.333 "superblock": false, 00:13:34.333 "method": "bdev_raid_create", 00:13:34.333 "req_id": 1 00:13:34.333 } 00:13:34.333 Got JSON-RPC error response 00:13:34.333 response: 00:13:34.333 { 00:13:34.333 "code": -17, 00:13:34.333 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:34.333 } 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.333 [2024-11-27 04:35:21.869713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:34.333 [2024-11-27 04:35:21.869809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.333 [2024-11-27 04:35:21.869855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:34.333 [2024-11-27 04:35:21.869875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.333 [2024-11-27 04:35:21.872659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.333 [2024-11-27 04:35:21.872706] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:34.333 [2024-11-27 04:35:21.872820] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:34.333 [2024-11-27 04:35:21.872886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:13:34.333 pt1 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.333 "name": "raid_bdev1", 00:13:34.333 "uuid": "5a71e065-cf6f-4c07-89ae-29c83bd72891", 00:13:34.333 
"strip_size_kb": 64, 00:13:34.333 "state": "configuring", 00:13:34.333 "raid_level": "raid0", 00:13:34.333 "superblock": true, 00:13:34.333 "num_base_bdevs": 3, 00:13:34.333 "num_base_bdevs_discovered": 1, 00:13:34.333 "num_base_bdevs_operational": 3, 00:13:34.333 "base_bdevs_list": [ 00:13:34.333 { 00:13:34.333 "name": "pt1", 00:13:34.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.333 "is_configured": true, 00:13:34.333 "data_offset": 2048, 00:13:34.333 "data_size": 63488 00:13:34.333 }, 00:13:34.333 { 00:13:34.333 "name": null, 00:13:34.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.333 "is_configured": false, 00:13:34.333 "data_offset": 2048, 00:13:34.333 "data_size": 63488 00:13:34.333 }, 00:13:34.333 { 00:13:34.333 "name": null, 00:13:34.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.333 "is_configured": false, 00:13:34.333 "data_offset": 2048, 00:13:34.333 "data_size": 63488 00:13:34.333 } 00:13:34.333 ] 00:13:34.333 }' 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.333 04:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.899 [2024-11-27 04:35:22.393925] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:34.899 [2024-11-27 04:35:22.394014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.899 [2024-11-27 04:35:22.394054] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:13:34.899 [2024-11-27 04:35:22.394069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.899 [2024-11-27 04:35:22.394619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.899 [2024-11-27 04:35:22.394656] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:34.899 [2024-11-27 04:35:22.394766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:34.899 [2024-11-27 04:35:22.394823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:34.899 pt2 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.899 [2024-11-27 04:35:22.401923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.899 04:35:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.899 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.899 "name": "raid_bdev1", 00:13:34.899 "uuid": "5a71e065-cf6f-4c07-89ae-29c83bd72891", 00:13:34.899 "strip_size_kb": 64, 00:13:34.899 "state": "configuring", 00:13:34.899 "raid_level": "raid0", 00:13:34.899 "superblock": true, 00:13:34.899 "num_base_bdevs": 3, 00:13:34.899 "num_base_bdevs_discovered": 1, 00:13:34.899 "num_base_bdevs_operational": 3, 00:13:34.899 "base_bdevs_list": [ 00:13:34.899 { 00:13:34.899 "name": "pt1", 00:13:34.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.900 "is_configured": true, 00:13:34.900 "data_offset": 2048, 00:13:34.900 "data_size": 63488 00:13:34.900 }, 00:13:34.900 { 00:13:34.900 "name": null, 00:13:34.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.900 "is_configured": false, 00:13:34.900 "data_offset": 0, 00:13:34.900 "data_size": 63488 00:13:34.900 }, 00:13:34.900 { 00:13:34.900 "name": null, 00:13:34.900 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.900 
"is_configured": false, 00:13:34.900 "data_offset": 2048, 00:13:34.900 "data_size": 63488 00:13:34.900 } 00:13:34.900 ] 00:13:34.900 }' 00:13:34.900 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.900 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.466 [2024-11-27 04:35:22.922072] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:35.466 [2024-11-27 04:35:22.922160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.466 [2024-11-27 04:35:22.922189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:35.466 [2024-11-27 04:35:22.922206] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.466 [2024-11-27 04:35:22.922802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.466 [2024-11-27 04:35:22.922842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:35.466 [2024-11-27 04:35:22.922942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:35.466 [2024-11-27 04:35:22.922981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:35.466 pt2 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.466 [2024-11-27 04:35:22.934047] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:35.466 [2024-11-27 04:35:22.934105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.466 [2024-11-27 04:35:22.934126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:35.466 [2024-11-27 04:35:22.934142] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.466 [2024-11-27 04:35:22.934579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.466 [2024-11-27 04:35:22.934624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:35.466 [2024-11-27 04:35:22.934698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:35.466 [2024-11-27 04:35:22.934730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:35.466 [2024-11-27 04:35:22.934907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:35.466 [2024-11-27 04:35:22.934942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:35.466 [2024-11-27 04:35:22.935255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:35.466 [2024-11-27 04:35:22.935459] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:35.466 [2024-11-27 04:35:22.935484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:35.466 [2024-11-27 04:35:22.935649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.466 pt3 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.466 "name": "raid_bdev1", 00:13:35.466 "uuid": "5a71e065-cf6f-4c07-89ae-29c83bd72891", 00:13:35.466 "strip_size_kb": 64, 00:13:35.466 "state": "online", 00:13:35.466 "raid_level": "raid0", 00:13:35.466 "superblock": true, 00:13:35.466 "num_base_bdevs": 3, 00:13:35.466 "num_base_bdevs_discovered": 3, 00:13:35.466 "num_base_bdevs_operational": 3, 00:13:35.466 "base_bdevs_list": [ 00:13:35.466 { 00:13:35.466 "name": "pt1", 00:13:35.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.466 "is_configured": true, 00:13:35.466 "data_offset": 2048, 00:13:35.466 "data_size": 63488 00:13:35.466 }, 00:13:35.466 { 00:13:35.466 "name": "pt2", 00:13:35.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.466 "is_configured": true, 00:13:35.466 "data_offset": 2048, 00:13:35.466 "data_size": 63488 00:13:35.466 }, 00:13:35.466 { 00:13:35.466 "name": "pt3", 00:13:35.466 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.466 "is_configured": true, 00:13:35.466 "data_offset": 2048, 00:13:35.466 "data_size": 63488 00:13:35.466 } 00:13:35.466 ] 00:13:35.466 }' 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.466 04:35:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:36.033 04:35:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:36.033 [2024-11-27 04:35:23.454645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:36.033 "name": "raid_bdev1", 00:13:36.033 "aliases": [ 00:13:36.033 "5a71e065-cf6f-4c07-89ae-29c83bd72891" 00:13:36.033 ], 00:13:36.033 "product_name": "Raid Volume", 00:13:36.033 "block_size": 512, 00:13:36.033 "num_blocks": 190464, 00:13:36.033 "uuid": "5a71e065-cf6f-4c07-89ae-29c83bd72891", 00:13:36.033 "assigned_rate_limits": { 00:13:36.033 "rw_ios_per_sec": 0, 00:13:36.033 "rw_mbytes_per_sec": 0, 00:13:36.033 "r_mbytes_per_sec": 0, 00:13:36.033 "w_mbytes_per_sec": 0 00:13:36.033 }, 00:13:36.033 "claimed": false, 00:13:36.033 "zoned": false, 00:13:36.033 "supported_io_types": { 00:13:36.033 "read": true, 00:13:36.033 "write": true, 00:13:36.033 "unmap": true, 00:13:36.033 "flush": true, 00:13:36.033 "reset": true, 00:13:36.033 "nvme_admin": false, 00:13:36.033 "nvme_io": false, 00:13:36.033 "nvme_io_md": false, 00:13:36.033 
"write_zeroes": true, 00:13:36.033 "zcopy": false, 00:13:36.033 "get_zone_info": false, 00:13:36.033 "zone_management": false, 00:13:36.033 "zone_append": false, 00:13:36.033 "compare": false, 00:13:36.033 "compare_and_write": false, 00:13:36.033 "abort": false, 00:13:36.033 "seek_hole": false, 00:13:36.033 "seek_data": false, 00:13:36.033 "copy": false, 00:13:36.033 "nvme_iov_md": false 00:13:36.033 }, 00:13:36.033 "memory_domains": [ 00:13:36.033 { 00:13:36.033 "dma_device_id": "system", 00:13:36.033 "dma_device_type": 1 00:13:36.033 }, 00:13:36.033 { 00:13:36.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.033 "dma_device_type": 2 00:13:36.033 }, 00:13:36.033 { 00:13:36.033 "dma_device_id": "system", 00:13:36.033 "dma_device_type": 1 00:13:36.033 }, 00:13:36.033 { 00:13:36.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.033 "dma_device_type": 2 00:13:36.033 }, 00:13:36.033 { 00:13:36.033 "dma_device_id": "system", 00:13:36.033 "dma_device_type": 1 00:13:36.033 }, 00:13:36.033 { 00:13:36.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.033 "dma_device_type": 2 00:13:36.033 } 00:13:36.033 ], 00:13:36.033 "driver_specific": { 00:13:36.033 "raid": { 00:13:36.033 "uuid": "5a71e065-cf6f-4c07-89ae-29c83bd72891", 00:13:36.033 "strip_size_kb": 64, 00:13:36.033 "state": "online", 00:13:36.033 "raid_level": "raid0", 00:13:36.033 "superblock": true, 00:13:36.033 "num_base_bdevs": 3, 00:13:36.033 "num_base_bdevs_discovered": 3, 00:13:36.033 "num_base_bdevs_operational": 3, 00:13:36.033 "base_bdevs_list": [ 00:13:36.033 { 00:13:36.033 "name": "pt1", 00:13:36.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.033 "is_configured": true, 00:13:36.033 "data_offset": 2048, 00:13:36.033 "data_size": 63488 00:13:36.033 }, 00:13:36.033 { 00:13:36.033 "name": "pt2", 00:13:36.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.033 "is_configured": true, 00:13:36.033 "data_offset": 2048, 00:13:36.033 "data_size": 63488 00:13:36.033 }, 00:13:36.033 
{ 00:13:36.033 "name": "pt3", 00:13:36.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.033 "is_configured": true, 00:13:36.033 "data_offset": 2048, 00:13:36.033 "data_size": 63488 00:13:36.033 } 00:13:36.033 ] 00:13:36.033 } 00:13:36.033 } 00:13:36.033 }' 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:36.033 pt2 00:13:36.033 pt3' 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.033 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:36.292 04:35:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:36.292 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.293 
[2024-11-27 04:35:23.786687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5a71e065-cf6f-4c07-89ae-29c83bd72891 '!=' 5a71e065-cf6f-4c07-89ae-29c83bd72891 ']' 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65207 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65207 ']' 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65207 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65207 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.293 killing process with pid 65207 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65207' 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65207 00:13:36.293 [2024-11-27 04:35:23.861207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.293 [2024-11-27 04:35:23.861335] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.293 04:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65207 00:13:36.293 [2024-11-27 04:35:23.861416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.293 [2024-11-27 04:35:23.861437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:36.551 [2024-11-27 04:35:24.141271] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.927 04:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:37.927 00:13:37.927 real 0m5.652s 00:13:37.927 user 0m8.503s 00:13:37.927 sys 0m0.823s 00:13:37.927 04:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.927 04:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.927 ************************************ 00:13:37.927 END TEST raid_superblock_test 00:13:37.927 ************************************ 00:13:37.927 04:35:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:13:37.927 04:35:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:37.927 04:35:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.927 04:35:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.927 ************************************ 00:13:37.927 START TEST raid_read_error_test 00:13:37.927 ************************************ 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:37.927 04:35:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zfM0jTj22K 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65466 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65466 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65466 ']' 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.927 04:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.927 [2024-11-27 04:35:25.370637] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:13:37.927 [2024-11-27 04:35:25.370806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65466 ] 00:13:37.927 [2024-11-27 04:35:25.538081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.186 [2024-11-27 04:35:25.669202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.445 [2024-11-27 04:35:25.874692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.445 [2024-11-27 04:35:25.874762] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.703 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.703 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:38.703 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:38.703 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:38.703 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.703 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 BaseBdev1_malloc 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 true 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 [2024-11-27 04:35:26.379540] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:38.961 [2024-11-27 04:35:26.379628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.961 [2024-11-27 04:35:26.379664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:38.961 [2024-11-27 04:35:26.379682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.961 [2024-11-27 04:35:26.382651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.961 [2024-11-27 04:35:26.382848] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:38.961 BaseBdev1 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 BaseBdev2_malloc 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 true 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 [2024-11-27 04:35:26.440273] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:38.961 [2024-11-27 04:35:26.440346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.961 [2024-11-27 04:35:26.440374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:38.961 [2024-11-27 04:35:26.440391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.961 [2024-11-27 04:35:26.443341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.961 [2024-11-27 04:35:26.443399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:38.961 BaseBdev2 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 BaseBdev3_malloc 00:13:38.961 04:35:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 true 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 [2024-11-27 04:35:26.515714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:38.961 [2024-11-27 04:35:26.515934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.961 [2024-11-27 04:35:26.515973] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:38.961 [2024-11-27 04:35:26.515994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.961 [2024-11-27 04:35:26.518908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.961 [2024-11-27 04:35:26.518963] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:38.961 BaseBdev3 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 [2024-11-27 04:35:26.523841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.961 [2024-11-27 04:35:26.526450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.961 [2024-11-27 04:35:26.526558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.961 [2024-11-27 04:35:26.526849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:38.961 [2024-11-27 04:35:26.526873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:38.961 [2024-11-27 04:35:26.527210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:38.961 [2024-11-27 04:35:26.527437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:38.961 [2024-11-27 04:35:26.527471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:38.961 [2024-11-27 04:35:26.527718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.961 04:35:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.220 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.220 "name": "raid_bdev1", 00:13:39.220 "uuid": "7e7cc11c-c23d-48ad-a400-3539d7af0a7e", 00:13:39.220 "strip_size_kb": 64, 00:13:39.220 "state": "online", 00:13:39.220 "raid_level": "raid0", 00:13:39.220 "superblock": true, 00:13:39.220 "num_base_bdevs": 3, 00:13:39.220 "num_base_bdevs_discovered": 3, 00:13:39.220 "num_base_bdevs_operational": 3, 00:13:39.220 "base_bdevs_list": [ 00:13:39.220 { 00:13:39.220 "name": "BaseBdev1", 00:13:39.220 "uuid": "0acb6cb9-6c0d-59fa-9232-50ae3a50c213", 00:13:39.220 "is_configured": true, 00:13:39.220 "data_offset": 2048, 00:13:39.220 "data_size": 63488 00:13:39.220 }, 00:13:39.220 { 00:13:39.220 "name": "BaseBdev2", 00:13:39.220 "uuid": "fa78df35-1655-5579-9753-fb7d9f5aab8c", 00:13:39.220 "is_configured": true, 00:13:39.220 "data_offset": 2048, 00:13:39.220 "data_size": 63488 
00:13:39.220 }, 00:13:39.220 { 00:13:39.220 "name": "BaseBdev3", 00:13:39.220 "uuid": "3d8da068-f0dc-57e7-a88f-1122b13c170b", 00:13:39.220 "is_configured": true, 00:13:39.220 "data_offset": 2048, 00:13:39.220 "data_size": 63488 00:13:39.220 } 00:13:39.220 ] 00:13:39.220 }' 00:13:39.220 04:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.220 04:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.479 04:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:39.479 04:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:39.737 [2024-11-27 04:35:27.153409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:40.684 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:40.684 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.685 "name": "raid_bdev1", 00:13:40.685 "uuid": "7e7cc11c-c23d-48ad-a400-3539d7af0a7e", 00:13:40.685 "strip_size_kb": 64, 00:13:40.685 "state": "online", 00:13:40.685 "raid_level": "raid0", 00:13:40.685 "superblock": true, 00:13:40.685 "num_base_bdevs": 3, 00:13:40.685 "num_base_bdevs_discovered": 3, 00:13:40.685 "num_base_bdevs_operational": 3, 00:13:40.685 "base_bdevs_list": [ 00:13:40.685 { 00:13:40.685 "name": "BaseBdev1", 00:13:40.685 "uuid": "0acb6cb9-6c0d-59fa-9232-50ae3a50c213", 00:13:40.685 "is_configured": true, 00:13:40.685 "data_offset": 2048, 00:13:40.685 "data_size": 63488 
00:13:40.685 }, 00:13:40.685 { 00:13:40.685 "name": "BaseBdev2", 00:13:40.685 "uuid": "fa78df35-1655-5579-9753-fb7d9f5aab8c", 00:13:40.685 "is_configured": true, 00:13:40.685 "data_offset": 2048, 00:13:40.685 "data_size": 63488 00:13:40.685 }, 00:13:40.685 { 00:13:40.685 "name": "BaseBdev3", 00:13:40.685 "uuid": "3d8da068-f0dc-57e7-a88f-1122b13c170b", 00:13:40.685 "is_configured": true, 00:13:40.685 "data_offset": 2048, 00:13:40.685 "data_size": 63488 00:13:40.685 } 00:13:40.685 ] 00:13:40.685 }' 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.685 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.252 [2024-11-27 04:35:28.589363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.252 [2024-11-27 04:35:28.589399] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.252 [2024-11-27 04:35:28.592849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.252 [2024-11-27 04:35:28.593047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.252 [2024-11-27 04:35:28.593121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.252 [2024-11-27 04:35:28.593138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.252 { 00:13:41.252 "results": [ 00:13:41.252 { 00:13:41.252 "job": "raid_bdev1", 
00:13:41.252 "core_mask": "0x1", 00:13:41.252 "workload": "randrw", 00:13:41.252 "percentage": 50, 00:13:41.252 "status": "finished", 00:13:41.252 "queue_depth": 1, 00:13:41.252 "io_size": 131072, 00:13:41.252 "runtime": 1.433439, 00:13:41.252 "iops": 10239.012612326022, 00:13:41.252 "mibps": 1279.8765765407527, 00:13:41.252 "io_failed": 1, 00:13:41.252 "io_timeout": 0, 00:13:41.252 "avg_latency_us": 136.2618023263016, 00:13:41.252 "min_latency_us": 42.123636363636365, 00:13:41.252 "max_latency_us": 1876.7127272727273 00:13:41.252 } 00:13:41.252 ], 00:13:41.252 "core_count": 1 00:13:41.252 } 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65466 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65466 ']' 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65466 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65466 00:13:41.252 killing process with pid 65466 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65466' 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65466 00:13:41.252 [2024-11-27 04:35:28.626322] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.252 04:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65466 00:13:41.252 [2024-11-27 
04:35:28.836269] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.624 04:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zfM0jTj22K 00:13:42.624 04:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:42.624 04:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:42.624 04:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:42.624 04:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:42.624 04:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:42.624 04:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:42.624 04:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:42.624 00:13:42.624 real 0m4.710s 00:13:42.624 user 0m5.792s 00:13:42.624 sys 0m0.612s 00:13:42.624 04:35:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.624 ************************************ 00:13:42.624 END TEST raid_read_error_test 00:13:42.624 ************************************ 00:13:42.624 04:35:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.624 04:35:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:13:42.624 04:35:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:42.624 04:35:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.624 04:35:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.624 ************************************ 00:13:42.624 START TEST raid_write_error_test 00:13:42.624 ************************************ 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:13:42.624 04:35:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:42.624 04:35:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NnoCYUvzmi 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65606 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65606 00:13:42.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65606 ']' 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.624 04:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.624 [2024-11-27 04:35:30.106163] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:13:42.624 [2024-11-27 04:35:30.106532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65606 ] 00:13:42.882 [2024-11-27 04:35:30.277347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.882 [2024-11-27 04:35:30.411952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.140 [2024-11-27 04:35:30.618969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.140 [2024-11-27 04:35:30.619191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.707 BaseBdev1_malloc 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.707 true 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.707 [2024-11-27 04:35:31.150882] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:43.707 [2024-11-27 04:35:31.150953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.707 [2024-11-27 04:35:31.150984] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:43.707 [2024-11-27 04:35:31.151002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.707 [2024-11-27 04:35:31.153937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.707 [2024-11-27 04:35:31.154122] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:43.707 BaseBdev1 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:43.707 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:43.708 BaseBdev2_malloc 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.708 true 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.708 [2024-11-27 04:35:31.211581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:43.708 [2024-11-27 04:35:31.211807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.708 [2024-11-27 04:35:31.211844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:43.708 [2024-11-27 04:35:31.211863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.708 [2024-11-27 04:35:31.214717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.708 [2024-11-27 04:35:31.214785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:43.708 BaseBdev2 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:43.708 04:35:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.708 BaseBdev3_malloc 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.708 true 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.708 [2024-11-27 04:35:31.277869] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:43.708 [2024-11-27 04:35:31.278071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.708 [2024-11-27 04:35:31.278144] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:43.708 [2024-11-27 04:35:31.278384] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.708 [2024-11-27 04:35:31.281238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.708 [2024-11-27 04:35:31.281289] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:43.708 BaseBdev3 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.708 [2024-11-27 04:35:31.285985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.708 [2024-11-27 04:35:31.288592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.708 [2024-11-27 04:35:31.288855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:43.708 [2024-11-27 04:35:31.289176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:43.708 [2024-11-27 04:35:31.289314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:43.708 [2024-11-27 04:35:31.289674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:43.708 [2024-11-27 04:35:31.289975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:43.708 [2024-11-27 04:35:31.290003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:43.708 [2024-11-27 04:35:31.290241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.708 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.967 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.967 "name": "raid_bdev1", 00:13:43.967 "uuid": "2f9c4623-89a9-48fe-8de0-4f9f4a9ce9fb", 00:13:43.967 "strip_size_kb": 64, 00:13:43.967 "state": "online", 00:13:43.967 "raid_level": "raid0", 00:13:43.967 "superblock": true, 00:13:43.967 "num_base_bdevs": 3, 00:13:43.967 "num_base_bdevs_discovered": 3, 00:13:43.967 "num_base_bdevs_operational": 3, 00:13:43.967 "base_bdevs_list": [ 00:13:43.967 { 00:13:43.967 "name": "BaseBdev1", 
00:13:43.967 "uuid": "f88e5ad1-20f3-511b-8876-975ad1298891", 00:13:43.967 "is_configured": true, 00:13:43.967 "data_offset": 2048, 00:13:43.967 "data_size": 63488 00:13:43.967 }, 00:13:43.967 { 00:13:43.967 "name": "BaseBdev2", 00:13:43.967 "uuid": "52210819-a996-5ec8-bd49-19b7eb5b04a9", 00:13:43.967 "is_configured": true, 00:13:43.967 "data_offset": 2048, 00:13:43.967 "data_size": 63488 00:13:43.967 }, 00:13:43.967 { 00:13:43.967 "name": "BaseBdev3", 00:13:43.967 "uuid": "19708672-7377-57ab-b533-db2850fafea0", 00:13:43.967 "is_configured": true, 00:13:43.967 "data_offset": 2048, 00:13:43.967 "data_size": 63488 00:13:43.967 } 00:13:43.967 ] 00:13:43.967 }' 00:13:43.967 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.967 04:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.224 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:44.224 04:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:44.499 [2024-11-27 04:35:31.935828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.434 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.434 "name": "raid_bdev1", 00:13:45.434 "uuid": "2f9c4623-89a9-48fe-8de0-4f9f4a9ce9fb", 00:13:45.434 "strip_size_kb": 64, 00:13:45.434 "state": "online", 00:13:45.434 
"raid_level": "raid0", 00:13:45.434 "superblock": true, 00:13:45.434 "num_base_bdevs": 3, 00:13:45.434 "num_base_bdevs_discovered": 3, 00:13:45.434 "num_base_bdevs_operational": 3, 00:13:45.434 "base_bdevs_list": [ 00:13:45.434 { 00:13:45.434 "name": "BaseBdev1", 00:13:45.434 "uuid": "f88e5ad1-20f3-511b-8876-975ad1298891", 00:13:45.434 "is_configured": true, 00:13:45.434 "data_offset": 2048, 00:13:45.434 "data_size": 63488 00:13:45.434 }, 00:13:45.434 { 00:13:45.435 "name": "BaseBdev2", 00:13:45.435 "uuid": "52210819-a996-5ec8-bd49-19b7eb5b04a9", 00:13:45.435 "is_configured": true, 00:13:45.435 "data_offset": 2048, 00:13:45.435 "data_size": 63488 00:13:45.435 }, 00:13:45.435 { 00:13:45.435 "name": "BaseBdev3", 00:13:45.435 "uuid": "19708672-7377-57ab-b533-db2850fafea0", 00:13:45.435 "is_configured": true, 00:13:45.435 "data_offset": 2048, 00:13:45.435 "data_size": 63488 00:13:45.435 } 00:13:45.435 ] 00:13:45.435 }' 00:13:45.435 04:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.435 04:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.999 [2024-11-27 04:35:33.342723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:45.999 [2024-11-27 04:35:33.342760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:45.999 [2024-11-27 04:35:33.346345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.999 [2024-11-27 04:35:33.346526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.999 [2024-11-27 04:35:33.346638] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.999 [2024-11-27 04:35:33.346914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:45.999 { 00:13:45.999 "results": [ 00:13:45.999 { 00:13:45.999 "job": "raid_bdev1", 00:13:45.999 "core_mask": "0x1", 00:13:45.999 "workload": "randrw", 00:13:45.999 "percentage": 50, 00:13:45.999 "status": "finished", 00:13:45.999 "queue_depth": 1, 00:13:45.999 "io_size": 131072, 00:13:45.999 "runtime": 1.40424, 00:13:45.999 "iops": 10308.066997094515, 00:13:45.999 "mibps": 1288.5083746368143, 00:13:45.999 "io_failed": 1, 00:13:45.999 "io_timeout": 0, 00:13:45.999 "avg_latency_us": 134.8757798487779, 00:13:45.999 "min_latency_us": 27.927272727272726, 00:13:45.999 "max_latency_us": 1809.6872727272728 00:13:45.999 } 00:13:45.999 ], 00:13:45.999 "core_count": 1 00:13:45.999 } 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65606 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65606 ']' 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65606 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65606 00:13:45.999 killing process with pid 65606 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.999 04:35:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65606' 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65606 00:13:45.999 [2024-11-27 04:35:33.386885] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.999 04:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65606 00:13:45.999 [2024-11-27 04:35:33.594336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.435 04:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NnoCYUvzmi 00:13:47.435 04:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:47.435 04:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:47.435 04:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:47.435 04:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:47.435 04:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:47.435 04:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:47.435 04:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:47.435 ************************************ 00:13:47.435 END TEST raid_write_error_test 00:13:47.435 ************************************ 00:13:47.435 00:13:47.435 real 0m4.708s 00:13:47.435 user 0m5.859s 00:13:47.435 sys 0m0.541s 00:13:47.435 04:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.435 04:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.435 04:35:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:47.435 04:35:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:13:47.435 04:35:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:47.435 04:35:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.435 04:35:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:47.435 ************************************ 00:13:47.435 START TEST raid_state_function_test 00:13:47.435 ************************************ 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:47.435 04:35:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:47.435 Process raid pid: 65755 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65755 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65755' 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65755 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:47.435 04:35:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65755 ']' 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.435 04:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.435 [2024-11-27 04:35:34.855860] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:13:47.435 [2024-11-27 04:35:34.856250] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.435 [2024-11-27 04:35:35.029113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.693 [2024-11-27 04:35:35.163703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.951 [2024-11-27 04:35:35.375366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.951 [2024-11-27 04:35:35.375600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.518 [2024-11-27 04:35:35.920833] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.518 [2024-11-27 04:35:35.921046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.518 [2024-11-27 04:35:35.921216] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.518 [2024-11-27 04:35:35.921361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.518 [2024-11-27 04:35:35.921474] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:48.518 [2024-11-27 04:35:35.921543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.518 "name": "Existed_Raid", 00:13:48.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.518 "strip_size_kb": 64, 00:13:48.518 "state": "configuring", 00:13:48.518 "raid_level": "concat", 00:13:48.518 "superblock": false, 00:13:48.518 "num_base_bdevs": 3, 00:13:48.518 "num_base_bdevs_discovered": 0, 00:13:48.518 "num_base_bdevs_operational": 3, 00:13:48.518 "base_bdevs_list": [ 00:13:48.518 { 00:13:48.518 "name": "BaseBdev1", 00:13:48.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.518 "is_configured": false, 00:13:48.518 "data_offset": 0, 00:13:48.518 "data_size": 0 00:13:48.518 }, 00:13:48.518 { 00:13:48.518 "name": "BaseBdev2", 00:13:48.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.518 "is_configured": false, 00:13:48.518 "data_offset": 0, 00:13:48.518 "data_size": 0 00:13:48.518 }, 00:13:48.518 { 00:13:48.518 "name": "BaseBdev3", 00:13:48.518 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:48.518 "is_configured": false, 00:13:48.518 "data_offset": 0, 00:13:48.518 "data_size": 0 00:13:48.518 } 00:13:48.518 ] 00:13:48.518 }' 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.518 04:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.085 [2024-11-27 04:35:36.468933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:49.085 [2024-11-27 04:35:36.469108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.085 [2024-11-27 04:35:36.476927] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:49.085 [2024-11-27 04:35:36.477111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:49.085 [2024-11-27 04:35:36.477230] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.085 [2024-11-27 04:35:36.477292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:13:49.085 [2024-11-27 04:35:36.477500] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:49.085 [2024-11-27 04:35:36.477563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.085 [2024-11-27 04:35:36.523295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.085 BaseBdev1 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.085 [ 00:13:49.085 { 00:13:49.085 "name": "BaseBdev1", 00:13:49.085 "aliases": [ 00:13:49.085 "b6fba3b8-2c2f-4942-a08c-44d0af14f009" 00:13:49.085 ], 00:13:49.085 "product_name": "Malloc disk", 00:13:49.085 "block_size": 512, 00:13:49.085 "num_blocks": 65536, 00:13:49.085 "uuid": "b6fba3b8-2c2f-4942-a08c-44d0af14f009", 00:13:49.085 "assigned_rate_limits": { 00:13:49.085 "rw_ios_per_sec": 0, 00:13:49.085 "rw_mbytes_per_sec": 0, 00:13:49.085 "r_mbytes_per_sec": 0, 00:13:49.085 "w_mbytes_per_sec": 0 00:13:49.085 }, 00:13:49.085 "claimed": true, 00:13:49.085 "claim_type": "exclusive_write", 00:13:49.085 "zoned": false, 00:13:49.085 "supported_io_types": { 00:13:49.085 "read": true, 00:13:49.085 "write": true, 00:13:49.085 "unmap": true, 00:13:49.085 "flush": true, 00:13:49.085 "reset": true, 00:13:49.085 "nvme_admin": false, 00:13:49.085 "nvme_io": false, 00:13:49.085 "nvme_io_md": false, 00:13:49.085 "write_zeroes": true, 00:13:49.085 "zcopy": true, 00:13:49.085 "get_zone_info": false, 00:13:49.085 "zone_management": false, 00:13:49.085 "zone_append": false, 00:13:49.085 "compare": false, 00:13:49.085 "compare_and_write": false, 00:13:49.085 "abort": true, 00:13:49.085 "seek_hole": false, 00:13:49.085 "seek_data": false, 00:13:49.085 "copy": true, 00:13:49.085 "nvme_iov_md": false 00:13:49.085 }, 00:13:49.085 "memory_domains": [ 00:13:49.085 { 00:13:49.085 "dma_device_id": "system", 00:13:49.085 "dma_device_type": 1 00:13:49.085 }, 00:13:49.085 { 00:13:49.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:49.085 "dma_device_type": 2 00:13:49.085 } 00:13:49.085 ], 00:13:49.085 "driver_specific": {} 00:13:49.085 } 00:13:49.085 ] 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.085 04:35:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.085 "name": "Existed_Raid", 00:13:49.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.085 "strip_size_kb": 64, 00:13:49.085 "state": "configuring", 00:13:49.085 "raid_level": "concat", 00:13:49.085 "superblock": false, 00:13:49.085 "num_base_bdevs": 3, 00:13:49.085 "num_base_bdevs_discovered": 1, 00:13:49.085 "num_base_bdevs_operational": 3, 00:13:49.085 "base_bdevs_list": [ 00:13:49.085 { 00:13:49.085 "name": "BaseBdev1", 00:13:49.085 "uuid": "b6fba3b8-2c2f-4942-a08c-44d0af14f009", 00:13:49.085 "is_configured": true, 00:13:49.085 "data_offset": 0, 00:13:49.085 "data_size": 65536 00:13:49.085 }, 00:13:49.085 { 00:13:49.085 "name": "BaseBdev2", 00:13:49.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.085 "is_configured": false, 00:13:49.085 "data_offset": 0, 00:13:49.085 "data_size": 0 00:13:49.085 }, 00:13:49.085 { 00:13:49.085 "name": "BaseBdev3", 00:13:49.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.085 "is_configured": false, 00:13:49.085 "data_offset": 0, 00:13:49.085 "data_size": 0 00:13:49.085 } 00:13:49.085 ] 00:13:49.085 }' 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.085 04:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.651 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:49.651 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.651 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.651 [2024-11-27 04:35:37.035496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:49.651 [2024-11-27 04:35:37.035561] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:49.651 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.651 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:49.651 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.651 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.651 [2024-11-27 04:35:37.043536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.651 [2024-11-27 04:35:37.046130] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.651 [2024-11-27 04:35:37.046304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:49.651 [2024-11-27 04:35:37.046332] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:49.651 [2024-11-27 04:35:37.046350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:49.651 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.652 04:35:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.652 "name": "Existed_Raid", 00:13:49.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.652 "strip_size_kb": 64, 00:13:49.652 "state": "configuring", 00:13:49.652 "raid_level": "concat", 00:13:49.652 "superblock": false, 00:13:49.652 "num_base_bdevs": 3, 00:13:49.652 "num_base_bdevs_discovered": 1, 00:13:49.652 "num_base_bdevs_operational": 3, 00:13:49.652 "base_bdevs_list": [ 00:13:49.652 { 00:13:49.652 "name": "BaseBdev1", 00:13:49.652 "uuid": "b6fba3b8-2c2f-4942-a08c-44d0af14f009", 00:13:49.652 "is_configured": true, 00:13:49.652 "data_offset": 
0, 00:13:49.652 "data_size": 65536 00:13:49.652 }, 00:13:49.652 { 00:13:49.652 "name": "BaseBdev2", 00:13:49.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.652 "is_configured": false, 00:13:49.652 "data_offset": 0, 00:13:49.652 "data_size": 0 00:13:49.652 }, 00:13:49.652 { 00:13:49.652 "name": "BaseBdev3", 00:13:49.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.652 "is_configured": false, 00:13:49.652 "data_offset": 0, 00:13:49.652 "data_size": 0 00:13:49.652 } 00:13:49.652 ] 00:13:49.652 }' 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.652 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.219 BaseBdev2 00:13:50.219 [2024-11-27 04:35:37.582320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.219 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.219 [ 00:13:50.219 { 00:13:50.219 "name": "BaseBdev2", 00:13:50.219 "aliases": [ 00:13:50.219 "216658b4-5307-4856-b517-9c0e36b0fdd5" 00:13:50.219 ], 00:13:50.219 "product_name": "Malloc disk", 00:13:50.219 "block_size": 512, 00:13:50.219 "num_blocks": 65536, 00:13:50.219 "uuid": "216658b4-5307-4856-b517-9c0e36b0fdd5", 00:13:50.219 "assigned_rate_limits": { 00:13:50.219 "rw_ios_per_sec": 0, 00:13:50.219 "rw_mbytes_per_sec": 0, 00:13:50.219 "r_mbytes_per_sec": 0, 00:13:50.219 "w_mbytes_per_sec": 0 00:13:50.219 }, 00:13:50.219 "claimed": true, 00:13:50.219 "claim_type": "exclusive_write", 00:13:50.219 "zoned": false, 00:13:50.219 "supported_io_types": { 00:13:50.219 "read": true, 00:13:50.219 "write": true, 00:13:50.219 "unmap": true, 00:13:50.220 "flush": true, 00:13:50.220 "reset": true, 00:13:50.220 "nvme_admin": false, 00:13:50.220 "nvme_io": false, 00:13:50.220 "nvme_io_md": false, 00:13:50.220 "write_zeroes": true, 00:13:50.220 "zcopy": true, 00:13:50.220 "get_zone_info": false, 00:13:50.220 "zone_management": false, 00:13:50.220 "zone_append": false, 00:13:50.220 "compare": false, 00:13:50.220 "compare_and_write": false, 00:13:50.220 "abort": true, 00:13:50.220 "seek_hole": 
false, 00:13:50.220 "seek_data": false, 00:13:50.220 "copy": true, 00:13:50.220 "nvme_iov_md": false 00:13:50.220 }, 00:13:50.220 "memory_domains": [ 00:13:50.220 { 00:13:50.220 "dma_device_id": "system", 00:13:50.220 "dma_device_type": 1 00:13:50.220 }, 00:13:50.220 { 00:13:50.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.220 "dma_device_type": 2 00:13:50.220 } 00:13:50.220 ], 00:13:50.220 "driver_specific": {} 00:13:50.220 } 00:13:50.220 ] 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.220 "name": "Existed_Raid", 00:13:50.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.220 "strip_size_kb": 64, 00:13:50.220 "state": "configuring", 00:13:50.220 "raid_level": "concat", 00:13:50.220 "superblock": false, 00:13:50.220 "num_base_bdevs": 3, 00:13:50.220 "num_base_bdevs_discovered": 2, 00:13:50.220 "num_base_bdevs_operational": 3, 00:13:50.220 "base_bdevs_list": [ 00:13:50.220 { 00:13:50.220 "name": "BaseBdev1", 00:13:50.220 "uuid": "b6fba3b8-2c2f-4942-a08c-44d0af14f009", 00:13:50.220 "is_configured": true, 00:13:50.220 "data_offset": 0, 00:13:50.220 "data_size": 65536 00:13:50.220 }, 00:13:50.220 { 00:13:50.220 "name": "BaseBdev2", 00:13:50.220 "uuid": "216658b4-5307-4856-b517-9c0e36b0fdd5", 00:13:50.220 "is_configured": true, 00:13:50.220 "data_offset": 0, 00:13:50.220 "data_size": 65536 00:13:50.220 }, 00:13:50.220 { 00:13:50.220 "name": "BaseBdev3", 00:13:50.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.220 "is_configured": false, 00:13:50.220 "data_offset": 0, 00:13:50.220 "data_size": 0 00:13:50.220 } 00:13:50.220 ] 00:13:50.220 }' 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.220 04:35:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.788 [2024-11-27 04:35:38.171093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.788 [2024-11-27 04:35:38.171150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:50.788 [2024-11-27 04:35:38.171186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:50.788 [2024-11-27 04:35:38.171538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:50.788 [2024-11-27 04:35:38.171805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:50.788 [2024-11-27 04:35:38.171824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:50.788 [2024-11-27 04:35:38.172177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.788 BaseBdev3 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.788 04:35:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.788 [ 00:13:50.788 { 00:13:50.788 "name": "BaseBdev3", 00:13:50.788 "aliases": [ 00:13:50.788 "6a17d40c-628c-422d-9a29-7dea63e07712" 00:13:50.788 ], 00:13:50.788 "product_name": "Malloc disk", 00:13:50.788 "block_size": 512, 00:13:50.788 "num_blocks": 65536, 00:13:50.788 "uuid": "6a17d40c-628c-422d-9a29-7dea63e07712", 00:13:50.788 "assigned_rate_limits": { 00:13:50.788 "rw_ios_per_sec": 0, 00:13:50.788 "rw_mbytes_per_sec": 0, 00:13:50.788 "r_mbytes_per_sec": 0, 00:13:50.788 "w_mbytes_per_sec": 0 00:13:50.788 }, 00:13:50.788 "claimed": true, 00:13:50.788 "claim_type": "exclusive_write", 00:13:50.788 "zoned": false, 00:13:50.788 "supported_io_types": { 00:13:50.788 "read": true, 00:13:50.788 "write": true, 00:13:50.788 "unmap": true, 00:13:50.788 "flush": true, 00:13:50.788 "reset": true, 00:13:50.788 "nvme_admin": false, 00:13:50.788 "nvme_io": false, 00:13:50.788 "nvme_io_md": false, 00:13:50.788 "write_zeroes": true, 00:13:50.788 "zcopy": true, 00:13:50.788 "get_zone_info": false, 00:13:50.788 "zone_management": false, 00:13:50.788 "zone_append": false, 00:13:50.788 "compare": false, 
00:13:50.788 "compare_and_write": false, 00:13:50.788 "abort": true, 00:13:50.788 "seek_hole": false, 00:13:50.788 "seek_data": false, 00:13:50.788 "copy": true, 00:13:50.788 "nvme_iov_md": false 00:13:50.788 }, 00:13:50.788 "memory_domains": [ 00:13:50.788 { 00:13:50.788 "dma_device_id": "system", 00:13:50.788 "dma_device_type": 1 00:13:50.788 }, 00:13:50.788 { 00:13:50.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.788 "dma_device_type": 2 00:13:50.788 } 00:13:50.788 ], 00:13:50.788 "driver_specific": {} 00:13:50.788 } 00:13:50.788 ] 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.788 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.788 "name": "Existed_Raid", 00:13:50.788 "uuid": "871f2fe7-f62c-4a46-9c4a-c26d4df6df63", 00:13:50.788 "strip_size_kb": 64, 00:13:50.788 "state": "online", 00:13:50.788 "raid_level": "concat", 00:13:50.789 "superblock": false, 00:13:50.789 "num_base_bdevs": 3, 00:13:50.789 "num_base_bdevs_discovered": 3, 00:13:50.789 "num_base_bdevs_operational": 3, 00:13:50.789 "base_bdevs_list": [ 00:13:50.789 { 00:13:50.789 "name": "BaseBdev1", 00:13:50.789 "uuid": "b6fba3b8-2c2f-4942-a08c-44d0af14f009", 00:13:50.789 "is_configured": true, 00:13:50.789 "data_offset": 0, 00:13:50.789 "data_size": 65536 00:13:50.789 }, 00:13:50.789 { 00:13:50.789 "name": "BaseBdev2", 00:13:50.789 "uuid": "216658b4-5307-4856-b517-9c0e36b0fdd5", 00:13:50.789 "is_configured": true, 00:13:50.789 "data_offset": 0, 00:13:50.789 "data_size": 65536 00:13:50.789 }, 00:13:50.789 { 00:13:50.789 "name": "BaseBdev3", 00:13:50.789 "uuid": "6a17d40c-628c-422d-9a29-7dea63e07712", 00:13:50.789 "is_configured": true, 00:13:50.789 "data_offset": 0, 00:13:50.789 "data_size": 65536 00:13:50.789 } 00:13:50.789 ] 00:13:50.789 }' 00:13:50.789 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
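Each `waitforbdev BaseBdev<N>` call in the log sets a default `bdev_timeout=2000` and then waits for `bdev_get_bdevs -b <name> -t 2000` to find the bdev. A rough Python sketch of that polling pattern follows; `get_bdev` stands in for the RPC lookup and the toy registry is an assumption for illustration, not part of SPDK.

```python
import time

def waitforbdev(get_bdev, bdev_name, timeout_s=2.0, poll_interval=0.05):
    """Sketch of the waitforbdev helper from autotest_common.sh:
    poll until the named bdev appears or the timeout expires.
    `get_bdev` stands in for `rpc_cmd bdev_get_bdevs -b <name>`."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_bdev(bdev_name) is not None:
            return True  # bdev showed up; the test can proceed
        time.sleep(poll_interval)
    return False  # timed out, mirroring a failed waitforbdev

# Toy registry standing in for the target's bdev list (an assumption).
registry = {"BaseBdev2": {"name": "BaseBdev2"}}
found = waitforbdev(lambda n: registry.get(n), "BaseBdev2", timeout_s=0.5)
missing = waitforbdev(lambda n: registry.get(n), "NoSuchBdev", timeout_s=0.2)
```

In the real script the timeout is handled server-side by the `-t 2000` argument to `bdev_get_bdevs`; the client-side loop here is just one way to express the same wait.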
00:13:50.789 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.354 [2024-11-27 04:35:38.711643] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.354 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:51.354 "name": "Existed_Raid", 00:13:51.354 "aliases": [ 00:13:51.354 "871f2fe7-f62c-4a46-9c4a-c26d4df6df63" 00:13:51.354 ], 00:13:51.354 "product_name": "Raid Volume", 00:13:51.354 "block_size": 512, 00:13:51.354 "num_blocks": 196608, 00:13:51.354 "uuid": "871f2fe7-f62c-4a46-9c4a-c26d4df6df63", 00:13:51.354 "assigned_rate_limits": { 00:13:51.354 "rw_ios_per_sec": 0, 00:13:51.354 "rw_mbytes_per_sec": 0, 00:13:51.354 "r_mbytes_per_sec": 
0, 00:13:51.354 "w_mbytes_per_sec": 0 00:13:51.354 }, 00:13:51.354 "claimed": false, 00:13:51.354 "zoned": false, 00:13:51.354 "supported_io_types": { 00:13:51.354 "read": true, 00:13:51.354 "write": true, 00:13:51.354 "unmap": true, 00:13:51.354 "flush": true, 00:13:51.354 "reset": true, 00:13:51.354 "nvme_admin": false, 00:13:51.354 "nvme_io": false, 00:13:51.354 "nvme_io_md": false, 00:13:51.354 "write_zeroes": true, 00:13:51.354 "zcopy": false, 00:13:51.354 "get_zone_info": false, 00:13:51.355 "zone_management": false, 00:13:51.355 "zone_append": false, 00:13:51.355 "compare": false, 00:13:51.355 "compare_and_write": false, 00:13:51.355 "abort": false, 00:13:51.355 "seek_hole": false, 00:13:51.355 "seek_data": false, 00:13:51.355 "copy": false, 00:13:51.355 "nvme_iov_md": false 00:13:51.355 }, 00:13:51.355 "memory_domains": [ 00:13:51.355 { 00:13:51.355 "dma_device_id": "system", 00:13:51.355 "dma_device_type": 1 00:13:51.355 }, 00:13:51.355 { 00:13:51.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.355 "dma_device_type": 2 00:13:51.355 }, 00:13:51.355 { 00:13:51.355 "dma_device_id": "system", 00:13:51.355 "dma_device_type": 1 00:13:51.355 }, 00:13:51.355 { 00:13:51.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.355 "dma_device_type": 2 00:13:51.355 }, 00:13:51.355 { 00:13:51.355 "dma_device_id": "system", 00:13:51.355 "dma_device_type": 1 00:13:51.355 }, 00:13:51.355 { 00:13:51.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.355 "dma_device_type": 2 00:13:51.355 } 00:13:51.355 ], 00:13:51.355 "driver_specific": { 00:13:51.355 "raid": { 00:13:51.355 "uuid": "871f2fe7-f62c-4a46-9c4a-c26d4df6df63", 00:13:51.355 "strip_size_kb": 64, 00:13:51.355 "state": "online", 00:13:51.355 "raid_level": "concat", 00:13:51.355 "superblock": false, 00:13:51.355 "num_base_bdevs": 3, 00:13:51.355 "num_base_bdevs_discovered": 3, 00:13:51.355 "num_base_bdevs_operational": 3, 00:13:51.355 "base_bdevs_list": [ 00:13:51.355 { 00:13:51.355 "name": "BaseBdev1", 
00:13:51.355 "uuid": "b6fba3b8-2c2f-4942-a08c-44d0af14f009",
00:13:51.355 "is_configured": true,
00:13:51.355 "data_offset": 0,
00:13:51.355 "data_size": 65536
00:13:51.355 },
00:13:51.355 {
00:13:51.355 "name": "BaseBdev2",
00:13:51.355 "uuid": "216658b4-5307-4856-b517-9c0e36b0fdd5",
00:13:51.355 "is_configured": true,
00:13:51.355 "data_offset": 0,
00:13:51.355 "data_size": 65536
00:13:51.355 },
00:13:51.355 {
00:13:51.355 "name": "BaseBdev3",
00:13:51.355 "uuid": "6a17d40c-628c-422d-9a29-7dea63e07712",
00:13:51.355 "is_configured": true,
00:13:51.355 "data_offset": 0,
00:13:51.355 "data_size": 65536
00:13:51.355 }
00:13:51.355 ]
00:13:51.355 }
00:13:51.355 }
00:13:51.355 }'
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:13:51.355 BaseBdev2
00:13:51.355 BaseBdev3'
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.355 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.613 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:51.613 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:51.613 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:51.613 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:13:51.613 04:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:51.613 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.613 04:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.614 [2024-11-27 04:35:39.043414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:51.614 [2024-11-27 04:35:39.043576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:51.614 [2024-11-27 04:35:39.043800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:51.614 "name": "Existed_Raid",
00:13:51.614 "uuid": "871f2fe7-f62c-4a46-9c4a-c26d4df6df63",
00:13:51.614 "strip_size_kb": 64,
00:13:51.614 "state": "offline",
00:13:51.614 "raid_level": "concat",
00:13:51.614 "superblock": false,
00:13:51.614 "num_base_bdevs": 3,
00:13:51.614 "num_base_bdevs_discovered": 2,
00:13:51.614 "num_base_bdevs_operational": 2,
00:13:51.614 "base_bdevs_list": [
00:13:51.614 {
00:13:51.614 "name": null,
00:13:51.614 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.614 "is_configured": false,
00:13:51.614 "data_offset": 0,
00:13:51.614 "data_size": 65536
00:13:51.614 },
00:13:51.614 {
00:13:51.614 "name": "BaseBdev2",
00:13:51.614 "uuid": "216658b4-5307-4856-b517-9c0e36b0fdd5",
00:13:51.614 "is_configured": true,
00:13:51.614 "data_offset": 0,
00:13:51.614 "data_size": 65536
00:13:51.614 },
00:13:51.614 {
00:13:51.614 "name": "BaseBdev3",
00:13:51.614 "uuid": "6a17d40c-628c-422d-9a29-7dea63e07712",
00:13:51.614 "is_configured": true,
00:13:51.614 "data_offset": 0,
00:13:51.614 "data_size": 65536
00:13:51.614 }
00:13:51.614 ]
00:13:51.614 }'
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:51.614 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.181 [2024-11-27 04:35:39.690929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.181 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:13:52.182 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:52.182 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.182 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.182 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.182 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:13:52.182 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.440 [2024-11-27 04:35:39.842377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:13:52.440 [2024-11-27 04:35:39.842576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.440 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.441 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.441 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:13:52.441 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:13:52.441 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:13:52.441 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:13:52.441 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:52.441 04:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:52.441 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.441 04:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.441 BaseBdev2
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.441 [
00:13:52.441 {
00:13:52.441 "name": "BaseBdev2",
00:13:52.441 "aliases": [
00:13:52.441 "dc7ea6ac-da59-4312-aafc-2222029acbd6"
00:13:52.441 ],
00:13:52.441 "product_name": "Malloc disk",
00:13:52.441 "block_size": 512,
00:13:52.441 "num_blocks": 65536,
00:13:52.441 "uuid": "dc7ea6ac-da59-4312-aafc-2222029acbd6",
00:13:52.441 "assigned_rate_limits": {
00:13:52.441 "rw_ios_per_sec": 0,
00:13:52.441 "rw_mbytes_per_sec": 0,
00:13:52.441 "r_mbytes_per_sec": 0,
00:13:52.441 "w_mbytes_per_sec": 0
00:13:52.441 },
00:13:52.441 "claimed": false,
00:13:52.441 "zoned": false,
00:13:52.441 "supported_io_types": {
00:13:52.441 "read": true,
00:13:52.441 "write": true,
00:13:52.441 "unmap": true,
00:13:52.441 "flush": true,
00:13:52.441 "reset": true,
00:13:52.441 "nvme_admin": false,
00:13:52.441 "nvme_io": false,
00:13:52.441 "nvme_io_md": false,
00:13:52.441 "write_zeroes": true,
00:13:52.441 "zcopy": true,
00:13:52.441 "get_zone_info": false,
00:13:52.441 "zone_management": false,
00:13:52.441 "zone_append": false,
00:13:52.441 "compare": false,
00:13:52.441 "compare_and_write": false,
00:13:52.441 "abort": true,
00:13:52.441 "seek_hole": false,
00:13:52.441 "seek_data": false,
00:13:52.441 "copy": true,
00:13:52.441 "nvme_iov_md": false
00:13:52.441 },
00:13:52.441 "memory_domains": [
00:13:52.441 {
00:13:52.441 "dma_device_id": "system",
00:13:52.441 "dma_device_type": 1
00:13:52.441 },
00:13:52.441 {
00:13:52.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:52.441 "dma_device_type": 2
00:13:52.441 }
00:13:52.441 ],
00:13:52.441 "driver_specific": {}
00:13:52.441 }
00:13:52.441 ]
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.441 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.784 BaseBdev3
00:13:52.784 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.784 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:13:52.784 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:52.784 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:52.784 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:52.784 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.785 [
00:13:52.785 {
00:13:52.785 "name": "BaseBdev3",
00:13:52.785 "aliases": [
00:13:52.785 "e4ad2b97-51af-4ab3-8e4a-578b9f613182"
00:13:52.785 ],
00:13:52.785 "product_name": "Malloc disk",
00:13:52.785 "block_size": 512,
00:13:52.785 "num_blocks": 65536,
00:13:52.785 "uuid": "e4ad2b97-51af-4ab3-8e4a-578b9f613182",
00:13:52.785 "assigned_rate_limits": {
00:13:52.785 "rw_ios_per_sec": 0,
00:13:52.785 "rw_mbytes_per_sec": 0,
00:13:52.785 "r_mbytes_per_sec": 0,
00:13:52.785 "w_mbytes_per_sec": 0
00:13:52.785 },
00:13:52.785 "claimed": false,
00:13:52.785 "zoned": false,
00:13:52.785 "supported_io_types": {
00:13:52.785 "read": true,
00:13:52.785 "write": true,
00:13:52.785 "unmap": true,
00:13:52.785 "flush": true,
00:13:52.785 "reset": true,
00:13:52.785 "nvme_admin": false,
00:13:52.785 "nvme_io": false,
00:13:52.785 "nvme_io_md": false,
00:13:52.785 "write_zeroes": true,
00:13:52.785 "zcopy": true,
00:13:52.785 "get_zone_info": false,
00:13:52.785 "zone_management": false,
00:13:52.785 "zone_append": false,
00:13:52.785 "compare": false,
00:13:52.785 "compare_and_write": false,
00:13:52.785 "abort": true,
00:13:52.785 "seek_hole": false,
00:13:52.785 "seek_data": false,
00:13:52.785 "copy": true,
00:13:52.785 "nvme_iov_md": false
00:13:52.785 },
00:13:52.785 "memory_domains": [
00:13:52.785 {
00:13:52.785 "dma_device_id": "system",
00:13:52.785 "dma_device_type": 1
00:13:52.785 },
00:13:52.785 {
00:13:52.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:52.785 "dma_device_type": 2
00:13:52.785 }
00:13:52.785 ],
00:13:52.785 "driver_specific": {}
00:13:52.785 }
00:13:52.785 ]
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.785 [2024-11-27 04:35:40.134077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:52.785 [2024-11-27 04:35:40.134267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:52.785 [2024-11-27 04:35:40.134423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:52.785 [2024-11-27 04:35:40.136950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:52.785 "name": "Existed_Raid",
00:13:52.785 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:52.785 "strip_size_kb": 64,
00:13:52.785 "state": "configuring",
00:13:52.785 "raid_level": "concat",
00:13:52.785 "superblock": false,
00:13:52.785 "num_base_bdevs": 3,
00:13:52.785 "num_base_bdevs_discovered": 2,
00:13:52.785 "num_base_bdevs_operational": 3,
00:13:52.785 "base_bdevs_list": [
00:13:52.785 {
00:13:52.785 "name": "BaseBdev1",
00:13:52.785 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:52.785 "is_configured": false,
00:13:52.785 "data_offset": 0,
00:13:52.785 "data_size": 0
00:13:52.785 },
00:13:52.785 {
00:13:52.785 "name": "BaseBdev2",
00:13:52.785 "uuid": "dc7ea6ac-da59-4312-aafc-2222029acbd6",
00:13:52.785 "is_configured": true,
00:13:52.785 "data_offset": 0,
00:13:52.785 "data_size": 65536
00:13:52.785 },
00:13:52.785 {
00:13:52.785 "name": "BaseBdev3",
00:13:52.785 "uuid": "e4ad2b97-51af-4ab3-8e4a-578b9f613182",
00:13:52.785 "is_configured": true,
00:13:52.785 "data_offset": 0,
00:13:52.785 "data_size": 65536
00:13:52.785 }
00:13:52.785 ]
00:13:52.785 }'
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:52.785 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.067 [2024-11-27 04:35:40.662359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:53.067 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.326 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.326 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:53.326 "name": "Existed_Raid",
00:13:53.326 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.326 "strip_size_kb": 64,
00:13:53.326 "state": "configuring",
00:13:53.326 "raid_level": "concat",
00:13:53.326 "superblock": false,
00:13:53.326 "num_base_bdevs": 3,
00:13:53.326 "num_base_bdevs_discovered": 1,
00:13:53.326 "num_base_bdevs_operational": 3,
00:13:53.326 "base_bdevs_list": [
00:13:53.326 {
00:13:53.326 "name": "BaseBdev1",
00:13:53.326 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.326 "is_configured": false,
00:13:53.326 "data_offset": 0,
00:13:53.326 "data_size": 0
00:13:53.326 },
00:13:53.326 {
00:13:53.326 "name": null,
00:13:53.326 "uuid": "dc7ea6ac-da59-4312-aafc-2222029acbd6",
00:13:53.326 "is_configured": false,
00:13:53.326 "data_offset": 0,
00:13:53.326 "data_size": 65536
00:13:53.326 },
00:13:53.326 {
00:13:53.326 "name": "BaseBdev3",
00:13:53.326 "uuid": "e4ad2b97-51af-4ab3-8e4a-578b9f613182",
00:13:53.326 "is_configured": true,
00:13:53.326 "data_offset": 0,
00:13:53.326 "data_size": 65536
00:13:53.326 }
00:13:53.326 ]
00:13:53.326 }'
00:13:53.326 04:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:53.326 04:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.585 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.585 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.585 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.585 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.844 [2024-11-27 04:35:41.291925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:53.844 BaseBdev1
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.844 [
00:13:53.844 {
00:13:53.844 "name": "BaseBdev1",
00:13:53.844 "aliases": [
00:13:53.844 "5372e3d1-7112-4a6d-b62b-c9ba3d02890e"
00:13:53.844 ],
00:13:53.844 "product_name": "Malloc disk",
00:13:53.844 "block_size": 512,
00:13:53.844 "num_blocks": 65536,
00:13:53.844 "uuid": "5372e3d1-7112-4a6d-b62b-c9ba3d02890e",
00:13:53.844 "assigned_rate_limits": {
00:13:53.844 "rw_ios_per_sec": 0,
00:13:53.844 "rw_mbytes_per_sec": 0,
00:13:53.844 "r_mbytes_per_sec": 0,
00:13:53.844 "w_mbytes_per_sec": 0
00:13:53.844 },
00:13:53.844 "claimed": true,
00:13:53.844 "claim_type": "exclusive_write",
00:13:53.844 "zoned": false,
00:13:53.844 "supported_io_types": {
00:13:53.844 "read": true,
00:13:53.844 "write": true,
00:13:53.844 "unmap": true,
00:13:53.844 "flush": true,
00:13:53.844 "reset": true,
00:13:53.844 "nvme_admin": false,
00:13:53.844 "nvme_io": false,
00:13:53.844 "nvme_io_md": false,
00:13:53.844 "write_zeroes": true,
00:13:53.844 "zcopy": true,
00:13:53.844 "get_zone_info": false,
00:13:53.844 "zone_management": false,
00:13:53.844 "zone_append": false,
00:13:53.844 "compare": false,
00:13:53.844 "compare_and_write": false,
00:13:53.844 "abort": true,
00:13:53.844 "seek_hole": false,
00:13:53.844 "seek_data": false,
00:13:53.844 "copy": true,
00:13:53.844 "nvme_iov_md": false
00:13:53.844 },
00:13:53.844 "memory_domains": [
00:13:53.844 {
00:13:53.844 "dma_device_id": "system",
00:13:53.844 "dma_device_type": 1
00:13:53.844 },
00:13:53.844 {
00:13:53.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:53.844 "dma_device_type": 2
00:13:53.844 }
00:13:53.844 ],
00:13:53.844 "driver_specific": {}
00:13:53.844 }
00:13:53.844 ]
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.844 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:53.844 "name": "Existed_Raid",
00:13:53.844 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.844 "strip_size_kb": 64,
00:13:53.844 "state": "configuring",
00:13:53.844 "raid_level": "concat",
00:13:53.844 "superblock": false,
00:13:53.844 "num_base_bdevs": 3,
00:13:53.844 "num_base_bdevs_discovered": 2,
00:13:53.844 "num_base_bdevs_operational": 3,
00:13:53.844 "base_bdevs_list": [
00:13:53.844 {
00:13:53.844 "name": "BaseBdev1",
00:13:53.844 "uuid": "5372e3d1-7112-4a6d-b62b-c9ba3d02890e",
00:13:53.844 "is_configured": true,
00:13:53.844 "data_offset": 0,
00:13:53.844 "data_size": 65536
00:13:53.844 },
00:13:53.844 {
00:13:53.844 "name": null,
00:13:53.844 "uuid": "dc7ea6ac-da59-4312-aafc-2222029acbd6",
00:13:53.844 "is_configured": false,
00:13:53.844 "data_offset": 0,
00:13:53.844 "data_size": 65536
00:13:53.844 },
00:13:53.844 {
00:13:53.844 "name": "BaseBdev3",
00:13:53.844 "uuid": "e4ad2b97-51af-4ab3-8e4a-578b9f613182",
00:13:53.844 "is_configured": true,
00:13:53.844 "data_offset": 0,
00:13:53.844 "data_size": 65536
00:13:53.844 }
00:13:53.844 ]
00:13:53.845 }'
00:13:53.845 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:53.845 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.461 [2024-11-27 04:35:41.900157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.461 "name": "Existed_Raid",
00:13:54.461 "uuid":
"00000000-0000-0000-0000-000000000000", 00:13:54.461 "strip_size_kb": 64, 00:13:54.461 "state": "configuring", 00:13:54.461 "raid_level": "concat", 00:13:54.461 "superblock": false, 00:13:54.461 "num_base_bdevs": 3, 00:13:54.461 "num_base_bdevs_discovered": 1, 00:13:54.461 "num_base_bdevs_operational": 3, 00:13:54.461 "base_bdevs_list": [ 00:13:54.461 { 00:13:54.461 "name": "BaseBdev1", 00:13:54.461 "uuid": "5372e3d1-7112-4a6d-b62b-c9ba3d02890e", 00:13:54.461 "is_configured": true, 00:13:54.461 "data_offset": 0, 00:13:54.461 "data_size": 65536 00:13:54.461 }, 00:13:54.461 { 00:13:54.461 "name": null, 00:13:54.461 "uuid": "dc7ea6ac-da59-4312-aafc-2222029acbd6", 00:13:54.461 "is_configured": false, 00:13:54.461 "data_offset": 0, 00:13:54.461 "data_size": 65536 00:13:54.461 }, 00:13:54.461 { 00:13:54.461 "name": null, 00:13:54.461 "uuid": "e4ad2b97-51af-4ab3-8e4a-578b9f613182", 00:13:54.461 "is_configured": false, 00:13:54.461 "data_offset": 0, 00:13:54.461 "data_size": 65536 00:13:54.461 } 00:13:54.461 ] 00:13:54.461 }' 00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.461 04:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.030 [2024-11-27 04:35:42.472348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.030 "name": "Existed_Raid", 00:13:55.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.030 "strip_size_kb": 64, 00:13:55.030 "state": "configuring", 00:13:55.030 "raid_level": "concat", 00:13:55.030 "superblock": false, 00:13:55.030 "num_base_bdevs": 3, 00:13:55.030 "num_base_bdevs_discovered": 2, 00:13:55.030 "num_base_bdevs_operational": 3, 00:13:55.030 "base_bdevs_list": [ 00:13:55.030 { 00:13:55.030 "name": "BaseBdev1", 00:13:55.030 "uuid": "5372e3d1-7112-4a6d-b62b-c9ba3d02890e", 00:13:55.030 "is_configured": true, 00:13:55.030 "data_offset": 0, 00:13:55.030 "data_size": 65536 00:13:55.030 }, 00:13:55.030 { 00:13:55.030 "name": null, 00:13:55.030 "uuid": "dc7ea6ac-da59-4312-aafc-2222029acbd6", 00:13:55.030 "is_configured": false, 00:13:55.030 "data_offset": 0, 00:13:55.030 "data_size": 65536 00:13:55.030 }, 00:13:55.030 { 00:13:55.030 "name": "BaseBdev3", 00:13:55.030 "uuid": "e4ad2b97-51af-4ab3-8e4a-578b9f613182", 00:13:55.030 "is_configured": true, 00:13:55.030 "data_offset": 0, 00:13:55.030 "data_size": 65536 00:13:55.030 } 00:13:55.030 ] 00:13:55.030 }' 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.030 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.597 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.597 04:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:55.597 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:55.597 04:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.597 [2024-11-27 04:35:43.044465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.597 04:35:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.597 "name": "Existed_Raid", 00:13:55.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.597 "strip_size_kb": 64, 00:13:55.597 "state": "configuring", 00:13:55.597 "raid_level": "concat", 00:13:55.597 "superblock": false, 00:13:55.597 "num_base_bdevs": 3, 00:13:55.597 "num_base_bdevs_discovered": 1, 00:13:55.597 "num_base_bdevs_operational": 3, 00:13:55.597 "base_bdevs_list": [ 00:13:55.597 { 00:13:55.597 "name": null, 00:13:55.597 "uuid": "5372e3d1-7112-4a6d-b62b-c9ba3d02890e", 00:13:55.597 "is_configured": false, 00:13:55.597 "data_offset": 0, 00:13:55.597 "data_size": 65536 00:13:55.597 }, 00:13:55.597 { 00:13:55.597 "name": null, 00:13:55.597 "uuid": "dc7ea6ac-da59-4312-aafc-2222029acbd6", 00:13:55.597 "is_configured": false, 00:13:55.597 "data_offset": 0, 00:13:55.597 "data_size": 65536 00:13:55.597 }, 00:13:55.597 { 00:13:55.597 "name": "BaseBdev3", 00:13:55.597 "uuid": "e4ad2b97-51af-4ab3-8e4a-578b9f613182", 00:13:55.597 "is_configured": true, 00:13:55.597 "data_offset": 0, 00:13:55.597 "data_size": 65536 00:13:55.597 } 00:13:55.597 ] 00:13:55.597 }' 00:13:55.597 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.597 04:35:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.163 [2024-11-27 04:35:43.708758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.163 04:35:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.163 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.163 "name": "Existed_Raid", 00:13:56.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.163 "strip_size_kb": 64, 00:13:56.163 "state": "configuring", 00:13:56.163 "raid_level": "concat", 00:13:56.163 "superblock": false, 00:13:56.163 "num_base_bdevs": 3, 00:13:56.163 "num_base_bdevs_discovered": 2, 00:13:56.163 "num_base_bdevs_operational": 3, 00:13:56.163 "base_bdevs_list": [ 00:13:56.163 { 00:13:56.163 "name": null, 00:13:56.163 "uuid": "5372e3d1-7112-4a6d-b62b-c9ba3d02890e", 00:13:56.163 "is_configured": false, 00:13:56.163 "data_offset": 0, 00:13:56.163 "data_size": 65536 00:13:56.163 }, 00:13:56.163 { 00:13:56.163 "name": "BaseBdev2", 00:13:56.163 "uuid": "dc7ea6ac-da59-4312-aafc-2222029acbd6", 00:13:56.163 "is_configured": true, 00:13:56.163 "data_offset": 
0, 00:13:56.163 "data_size": 65536 00:13:56.163 }, 00:13:56.163 { 00:13:56.163 "name": "BaseBdev3", 00:13:56.163 "uuid": "e4ad2b97-51af-4ab3-8e4a-578b9f613182", 00:13:56.163 "is_configured": true, 00:13:56.163 "data_offset": 0, 00:13:56.163 "data_size": 65536 00:13:56.163 } 00:13:56.163 ] 00:13:56.163 }' 00:13:56.164 04:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.164 04:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5372e3d1-7112-4a6d-b62b-c9ba3d02890e 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.730 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.005 [2024-11-27 04:35:44.362472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:57.005 [2024-11-27 04:35:44.362753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:57.005 [2024-11-27 04:35:44.362804] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:57.005 [2024-11-27 04:35:44.363131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:57.005 [2024-11-27 04:35:44.363328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:57.005 [2024-11-27 04:35:44.363345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:57.005 [2024-11-27 04:35:44.363649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.005 NewBaseBdev 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.005 
04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.005 [ 00:13:57.005 { 00:13:57.005 "name": "NewBaseBdev", 00:13:57.005 "aliases": [ 00:13:57.005 "5372e3d1-7112-4a6d-b62b-c9ba3d02890e" 00:13:57.005 ], 00:13:57.005 "product_name": "Malloc disk", 00:13:57.005 "block_size": 512, 00:13:57.005 "num_blocks": 65536, 00:13:57.005 "uuid": "5372e3d1-7112-4a6d-b62b-c9ba3d02890e", 00:13:57.005 "assigned_rate_limits": { 00:13:57.005 "rw_ios_per_sec": 0, 00:13:57.005 "rw_mbytes_per_sec": 0, 00:13:57.005 "r_mbytes_per_sec": 0, 00:13:57.005 "w_mbytes_per_sec": 0 00:13:57.005 }, 00:13:57.005 "claimed": true, 00:13:57.005 "claim_type": "exclusive_write", 00:13:57.005 "zoned": false, 00:13:57.005 "supported_io_types": { 00:13:57.005 "read": true, 00:13:57.005 "write": true, 00:13:57.005 "unmap": true, 00:13:57.005 "flush": true, 00:13:57.005 "reset": true, 00:13:57.005 "nvme_admin": false, 00:13:57.005 "nvme_io": false, 00:13:57.005 "nvme_io_md": false, 00:13:57.005 "write_zeroes": true, 00:13:57.005 "zcopy": true, 00:13:57.005 "get_zone_info": false, 00:13:57.005 "zone_management": false, 00:13:57.005 "zone_append": false, 00:13:57.005 "compare": false, 00:13:57.005 "compare_and_write": false, 00:13:57.005 "abort": true, 00:13:57.005 "seek_hole": false, 00:13:57.005 "seek_data": false, 00:13:57.005 "copy": true, 00:13:57.005 "nvme_iov_md": false 00:13:57.005 }, 00:13:57.005 
"memory_domains": [ 00:13:57.005 { 00:13:57.005 "dma_device_id": "system", 00:13:57.005 "dma_device_type": 1 00:13:57.005 }, 00:13:57.005 { 00:13:57.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.005 "dma_device_type": 2 00:13:57.005 } 00:13:57.005 ], 00:13:57.005 "driver_specific": {} 00:13:57.005 } 00:13:57.005 ] 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.005 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.006 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.006 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.006 "name": "Existed_Raid", 00:13:57.006 "uuid": "076addbb-33f0-431c-9818-4cb7a3c2d032", 00:13:57.006 "strip_size_kb": 64, 00:13:57.006 "state": "online", 00:13:57.006 "raid_level": "concat", 00:13:57.006 "superblock": false, 00:13:57.006 "num_base_bdevs": 3, 00:13:57.006 "num_base_bdevs_discovered": 3, 00:13:57.006 "num_base_bdevs_operational": 3, 00:13:57.006 "base_bdevs_list": [ 00:13:57.006 { 00:13:57.006 "name": "NewBaseBdev", 00:13:57.006 "uuid": "5372e3d1-7112-4a6d-b62b-c9ba3d02890e", 00:13:57.006 "is_configured": true, 00:13:57.006 "data_offset": 0, 00:13:57.006 "data_size": 65536 00:13:57.006 }, 00:13:57.006 { 00:13:57.006 "name": "BaseBdev2", 00:13:57.006 "uuid": "dc7ea6ac-da59-4312-aafc-2222029acbd6", 00:13:57.006 "is_configured": true, 00:13:57.006 "data_offset": 0, 00:13:57.006 "data_size": 65536 00:13:57.006 }, 00:13:57.006 { 00:13:57.006 "name": "BaseBdev3", 00:13:57.006 "uuid": "e4ad2b97-51af-4ab3-8e4a-578b9f613182", 00:13:57.006 "is_configured": true, 00:13:57.006 "data_offset": 0, 00:13:57.006 "data_size": 65536 00:13:57.006 } 00:13:57.006 ] 00:13:57.006 }' 00:13:57.006 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.006 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.263 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:57.263 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:57.263 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:13:57.263 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:57.263 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:57.263 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:57.263 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:57.263 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.263 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.263 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:57.263 [2024-11-27 04:35:44.875041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.522 04:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.522 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:57.522 "name": "Existed_Raid", 00:13:57.522 "aliases": [ 00:13:57.522 "076addbb-33f0-431c-9818-4cb7a3c2d032" 00:13:57.522 ], 00:13:57.522 "product_name": "Raid Volume", 00:13:57.522 "block_size": 512, 00:13:57.522 "num_blocks": 196608, 00:13:57.522 "uuid": "076addbb-33f0-431c-9818-4cb7a3c2d032", 00:13:57.522 "assigned_rate_limits": { 00:13:57.522 "rw_ios_per_sec": 0, 00:13:57.522 "rw_mbytes_per_sec": 0, 00:13:57.522 "r_mbytes_per_sec": 0, 00:13:57.522 "w_mbytes_per_sec": 0 00:13:57.522 }, 00:13:57.522 "claimed": false, 00:13:57.522 "zoned": false, 00:13:57.522 "supported_io_types": { 00:13:57.522 "read": true, 00:13:57.522 "write": true, 00:13:57.522 "unmap": true, 00:13:57.522 "flush": true, 00:13:57.522 "reset": true, 00:13:57.522 "nvme_admin": false, 00:13:57.522 "nvme_io": false, 00:13:57.522 "nvme_io_md": false, 00:13:57.522 "write_zeroes": true, 
00:13:57.522 "zcopy": false, 00:13:57.522 "get_zone_info": false, 00:13:57.522 "zone_management": false, 00:13:57.522 "zone_append": false, 00:13:57.522 "compare": false, 00:13:57.522 "compare_and_write": false, 00:13:57.522 "abort": false, 00:13:57.522 "seek_hole": false, 00:13:57.522 "seek_data": false, 00:13:57.522 "copy": false, 00:13:57.522 "nvme_iov_md": false 00:13:57.522 }, 00:13:57.522 "memory_domains": [ 00:13:57.522 { 00:13:57.522 "dma_device_id": "system", 00:13:57.522 "dma_device_type": 1 00:13:57.522 }, 00:13:57.522 { 00:13:57.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.522 "dma_device_type": 2 00:13:57.522 }, 00:13:57.522 { 00:13:57.522 "dma_device_id": "system", 00:13:57.522 "dma_device_type": 1 00:13:57.522 }, 00:13:57.522 { 00:13:57.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.522 "dma_device_type": 2 00:13:57.522 }, 00:13:57.522 { 00:13:57.522 "dma_device_id": "system", 00:13:57.522 "dma_device_type": 1 00:13:57.522 }, 00:13:57.522 { 00:13:57.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.522 "dma_device_type": 2 00:13:57.522 } 00:13:57.522 ], 00:13:57.522 "driver_specific": { 00:13:57.522 "raid": { 00:13:57.522 "uuid": "076addbb-33f0-431c-9818-4cb7a3c2d032", 00:13:57.522 "strip_size_kb": 64, 00:13:57.522 "state": "online", 00:13:57.522 "raid_level": "concat", 00:13:57.522 "superblock": false, 00:13:57.522 "num_base_bdevs": 3, 00:13:57.522 "num_base_bdevs_discovered": 3, 00:13:57.522 "num_base_bdevs_operational": 3, 00:13:57.522 "base_bdevs_list": [ 00:13:57.522 { 00:13:57.522 "name": "NewBaseBdev", 00:13:57.522 "uuid": "5372e3d1-7112-4a6d-b62b-c9ba3d02890e", 00:13:57.522 "is_configured": true, 00:13:57.522 "data_offset": 0, 00:13:57.522 "data_size": 65536 00:13:57.522 }, 00:13:57.522 { 00:13:57.522 "name": "BaseBdev2", 00:13:57.522 "uuid": "dc7ea6ac-da59-4312-aafc-2222029acbd6", 00:13:57.522 "is_configured": true, 00:13:57.522 "data_offset": 0, 00:13:57.522 "data_size": 65536 00:13:57.522 }, 00:13:57.522 { 
00:13:57.522 "name": "BaseBdev3", 00:13:57.522 "uuid": "e4ad2b97-51af-4ab3-8e4a-578b9f613182", 00:13:57.522 "is_configured": true, 00:13:57.522 "data_offset": 0, 00:13:57.522 "data_size": 65536 00:13:57.522 } 00:13:57.522 ] 00:13:57.522 } 00:13:57.522 } 00:13:57.522 }' 00:13:57.522 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.522 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:57.522 BaseBdev2 00:13:57.522 BaseBdev3' 00:13:57.522 04:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.522 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:57.780 [2024-11-27 04:35:45.182760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:57.780 [2024-11-27 04:35:45.182927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.780 [2024-11-27 04:35:45.183150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.780 [2024-11-27 04:35:45.183324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.780 [2024-11-27 04:35:45.183459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65755 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65755 ']' 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65755 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65755 00:13:57.780 killing process with pid 65755 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65755' 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65755 00:13:57.780 [2024-11-27 04:35:45.219857] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.780 04:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65755 00:13:58.037 [2024-11-27 04:35:45.491329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.968 04:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:58.968 00:13:58.968 real 0m11.768s 00:13:58.968 user 0m19.555s 00:13:58.968 sys 0m1.581s 00:13:58.968 04:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.968 04:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.968 ************************************ 00:13:58.968 END TEST raid_state_function_test 00:13:58.968 ************************************ 00:13:58.968 04:35:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:13:58.968 04:35:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:58.968 04:35:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.968 04:35:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.968 ************************************ 00:13:58.968 START TEST raid_state_function_test_sb 00:13:58.968 ************************************ 00:13:58.968 04:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:13:58.968 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:58.968 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:58.969 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:59.226 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:59.226 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:59.227 Process raid pid: 66388 00:13:59.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66388 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66388' 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66388 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66388 ']' 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.227 04:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.227 [2024-11-27 04:35:46.693394] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:13:59.227 [2024-11-27 04:35:46.693836] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.484 [2024-11-27 04:35:46.898855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.484 [2024-11-27 04:35:47.031044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.742 [2024-11-27 04:35:47.239621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.742 [2024-11-27 04:35:47.239674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.322 [2024-11-27 04:35:47.648980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.322 [2024-11-27 04:35:47.649163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.322 [2024-11-27 
04:35:47.649339] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.322 [2024-11-27 04:35:47.649486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.322 [2024-11-27 04:35:47.649620] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:00.322 [2024-11-27 04:35:47.649742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.322 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.322 "name": "Existed_Raid", 00:14:00.322 "uuid": "657558af-6282-4c0f-bb78-1310ad3a049b", 00:14:00.322 "strip_size_kb": 64, 00:14:00.322 "state": "configuring", 00:14:00.322 "raid_level": "concat", 00:14:00.322 "superblock": true, 00:14:00.322 "num_base_bdevs": 3, 00:14:00.322 "num_base_bdevs_discovered": 0, 00:14:00.323 "num_base_bdevs_operational": 3, 00:14:00.323 "base_bdevs_list": [ 00:14:00.323 { 00:14:00.323 "name": "BaseBdev1", 00:14:00.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.323 "is_configured": false, 00:14:00.323 "data_offset": 0, 00:14:00.323 "data_size": 0 00:14:00.323 }, 00:14:00.323 { 00:14:00.323 "name": "BaseBdev2", 00:14:00.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.323 "is_configured": false, 00:14:00.323 "data_offset": 0, 00:14:00.323 "data_size": 0 00:14:00.323 }, 00:14:00.323 { 00:14:00.323 "name": "BaseBdev3", 00:14:00.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.323 "is_configured": false, 00:14:00.323 "data_offset": 0, 00:14:00.323 "data_size": 0 00:14:00.323 } 00:14:00.323 ] 00:14:00.323 }' 00:14:00.323 04:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.323 04:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.580 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:00.580 04:35:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.580 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.580 [2024-11-27 04:35:48.157102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.580 [2024-11-27 04:35:48.157160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:00.580 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.580 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:00.580 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.580 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.580 [2024-11-27 04:35:48.165094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.580 [2024-11-27 04:35:48.165284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.580 [2024-11-27 04:35:48.165435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.580 [2024-11-27 04:35:48.165587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.580 [2024-11-27 04:35:48.165726] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:00.580 [2024-11-27 04:35:48.165894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.580 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.580 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:00.580 
04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.580 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.881 [2024-11-27 04:35:48.209961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.881 BaseBdev1 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.881 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.881 [ 00:14:00.881 { 
00:14:00.881 "name": "BaseBdev1", 00:14:00.881 "aliases": [ 00:14:00.881 "03e5ae0f-6eec-43c2-beff-54f371d1e8e8" 00:14:00.881 ], 00:14:00.881 "product_name": "Malloc disk", 00:14:00.881 "block_size": 512, 00:14:00.881 "num_blocks": 65536, 00:14:00.881 "uuid": "03e5ae0f-6eec-43c2-beff-54f371d1e8e8", 00:14:00.881 "assigned_rate_limits": { 00:14:00.881 "rw_ios_per_sec": 0, 00:14:00.881 "rw_mbytes_per_sec": 0, 00:14:00.881 "r_mbytes_per_sec": 0, 00:14:00.881 "w_mbytes_per_sec": 0 00:14:00.881 }, 00:14:00.881 "claimed": true, 00:14:00.881 "claim_type": "exclusive_write", 00:14:00.881 "zoned": false, 00:14:00.881 "supported_io_types": { 00:14:00.881 "read": true, 00:14:00.881 "write": true, 00:14:00.881 "unmap": true, 00:14:00.881 "flush": true, 00:14:00.881 "reset": true, 00:14:00.881 "nvme_admin": false, 00:14:00.881 "nvme_io": false, 00:14:00.881 "nvme_io_md": false, 00:14:00.881 "write_zeroes": true, 00:14:00.881 "zcopy": true, 00:14:00.881 "get_zone_info": false, 00:14:00.881 "zone_management": false, 00:14:00.881 "zone_append": false, 00:14:00.881 "compare": false, 00:14:00.881 "compare_and_write": false, 00:14:00.881 "abort": true, 00:14:00.881 "seek_hole": false, 00:14:00.882 "seek_data": false, 00:14:00.882 "copy": true, 00:14:00.882 "nvme_iov_md": false 00:14:00.882 }, 00:14:00.882 "memory_domains": [ 00:14:00.882 { 00:14:00.882 "dma_device_id": "system", 00:14:00.882 "dma_device_type": 1 00:14:00.882 }, 00:14:00.882 { 00:14:00.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.882 "dma_device_type": 2 00:14:00.882 } 00:14:00.882 ], 00:14:00.882 "driver_specific": {} 00:14:00.882 } 00:14:00.882 ] 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.882 "name": "Existed_Raid", 00:14:00.882 "uuid": "99bd90c5-8cd8-449c-b496-4cd470ccec86", 00:14:00.882 "strip_size_kb": 64, 00:14:00.882 "state": "configuring", 00:14:00.882 "raid_level": "concat", 00:14:00.882 "superblock": true, 00:14:00.882 
"num_base_bdevs": 3, 00:14:00.882 "num_base_bdevs_discovered": 1, 00:14:00.882 "num_base_bdevs_operational": 3, 00:14:00.882 "base_bdevs_list": [ 00:14:00.882 { 00:14:00.882 "name": "BaseBdev1", 00:14:00.882 "uuid": "03e5ae0f-6eec-43c2-beff-54f371d1e8e8", 00:14:00.882 "is_configured": true, 00:14:00.882 "data_offset": 2048, 00:14:00.882 "data_size": 63488 00:14:00.882 }, 00:14:00.882 { 00:14:00.882 "name": "BaseBdev2", 00:14:00.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.882 "is_configured": false, 00:14:00.882 "data_offset": 0, 00:14:00.882 "data_size": 0 00:14:00.882 }, 00:14:00.882 { 00:14:00.882 "name": "BaseBdev3", 00:14:00.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.882 "is_configured": false, 00:14:00.882 "data_offset": 0, 00:14:00.882 "data_size": 0 00:14:00.882 } 00:14:00.882 ] 00:14:00.882 }' 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.882 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.201 [2024-11-27 04:35:48.746164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.201 [2024-11-27 04:35:48.746234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:01.201 
04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.201 [2024-11-27 04:35:48.754260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.201 [2024-11-27 04:35:48.756850] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.201 [2024-11-27 04:35:48.757031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.201 [2024-11-27 04:35:48.757200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.201 [2024-11-27 04:35:48.757324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.201 "name": "Existed_Raid", 00:14:01.201 "uuid": "0b3c819c-9049-46b9-a80f-9fe5b7beb127", 00:14:01.201 "strip_size_kb": 64, 00:14:01.201 "state": "configuring", 00:14:01.201 "raid_level": "concat", 00:14:01.201 "superblock": true, 00:14:01.201 "num_base_bdevs": 3, 00:14:01.201 "num_base_bdevs_discovered": 1, 00:14:01.201 "num_base_bdevs_operational": 3, 00:14:01.201 "base_bdevs_list": [ 00:14:01.201 { 00:14:01.201 "name": "BaseBdev1", 00:14:01.201 "uuid": "03e5ae0f-6eec-43c2-beff-54f371d1e8e8", 00:14:01.201 "is_configured": true, 00:14:01.201 "data_offset": 2048, 00:14:01.201 "data_size": 63488 00:14:01.201 }, 00:14:01.201 { 00:14:01.201 "name": "BaseBdev2", 00:14:01.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.201 "is_configured": false, 00:14:01.201 "data_offset": 0, 00:14:01.201 "data_size": 0 00:14:01.201 }, 00:14:01.201 { 00:14:01.201 "name": "BaseBdev3", 00:14:01.201 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:01.201 "is_configured": false, 00:14:01.201 "data_offset": 0, 00:14:01.201 "data_size": 0 00:14:01.201 } 00:14:01.201 ] 00:14:01.201 }' 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.201 04:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.768 [2024-11-27 04:35:49.336565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.768 BaseBdev2 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.768 [ 00:14:01.768 { 00:14:01.768 "name": "BaseBdev2", 00:14:01.768 "aliases": [ 00:14:01.768 "4cf1d31d-fb95-4013-9aee-9d41572a8da5" 00:14:01.768 ], 00:14:01.768 "product_name": "Malloc disk", 00:14:01.768 "block_size": 512, 00:14:01.768 "num_blocks": 65536, 00:14:01.768 "uuid": "4cf1d31d-fb95-4013-9aee-9d41572a8da5", 00:14:01.768 "assigned_rate_limits": { 00:14:01.768 "rw_ios_per_sec": 0, 00:14:01.768 "rw_mbytes_per_sec": 0, 00:14:01.768 "r_mbytes_per_sec": 0, 00:14:01.768 "w_mbytes_per_sec": 0 00:14:01.768 }, 00:14:01.768 "claimed": true, 00:14:01.768 "claim_type": "exclusive_write", 00:14:01.768 "zoned": false, 00:14:01.768 "supported_io_types": { 00:14:01.768 "read": true, 00:14:01.768 "write": true, 00:14:01.768 "unmap": true, 00:14:01.768 "flush": true, 00:14:01.768 "reset": true, 00:14:01.768 "nvme_admin": false, 00:14:01.768 "nvme_io": false, 00:14:01.768 "nvme_io_md": false, 00:14:01.768 "write_zeroes": true, 00:14:01.768 "zcopy": true, 00:14:01.768 "get_zone_info": false, 00:14:01.768 "zone_management": false, 00:14:01.768 "zone_append": false, 00:14:01.768 "compare": false, 00:14:01.768 "compare_and_write": false, 00:14:01.768 "abort": true, 00:14:01.768 "seek_hole": false, 00:14:01.768 "seek_data": false, 00:14:01.768 "copy": true, 00:14:01.768 "nvme_iov_md": false 00:14:01.768 }, 00:14:01.768 "memory_domains": [ 00:14:01.768 { 00:14:01.768 "dma_device_id": "system", 00:14:01.768 "dma_device_type": 1 00:14:01.768 }, 00:14:01.768 { 00:14:01.768 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.768 "dma_device_type": 2 00:14:01.768 } 00:14:01.768 ], 00:14:01.768 "driver_specific": {} 00:14:01.768 } 00:14:01.768 ] 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.768 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.026 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.026 "name": "Existed_Raid", 00:14:02.026 "uuid": "0b3c819c-9049-46b9-a80f-9fe5b7beb127", 00:14:02.026 "strip_size_kb": 64, 00:14:02.026 "state": "configuring", 00:14:02.026 "raid_level": "concat", 00:14:02.026 "superblock": true, 00:14:02.026 "num_base_bdevs": 3, 00:14:02.026 "num_base_bdevs_discovered": 2, 00:14:02.026 "num_base_bdevs_operational": 3, 00:14:02.026 "base_bdevs_list": [ 00:14:02.026 { 00:14:02.026 "name": "BaseBdev1", 00:14:02.026 "uuid": "03e5ae0f-6eec-43c2-beff-54f371d1e8e8", 00:14:02.026 "is_configured": true, 00:14:02.026 "data_offset": 2048, 00:14:02.026 "data_size": 63488 00:14:02.026 }, 00:14:02.026 { 00:14:02.026 "name": "BaseBdev2", 00:14:02.026 "uuid": "4cf1d31d-fb95-4013-9aee-9d41572a8da5", 00:14:02.026 "is_configured": true, 00:14:02.026 "data_offset": 2048, 00:14:02.026 "data_size": 63488 00:14:02.026 }, 00:14:02.026 { 00:14:02.026 "name": "BaseBdev3", 00:14:02.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.026 "is_configured": false, 00:14:02.026 "data_offset": 0, 00:14:02.026 "data_size": 0 00:14:02.026 } 00:14:02.026 ] 00:14:02.026 }' 00:14:02.026 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.026 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:02.285 04:35:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.285 BaseBdev3 00:14:02.285 [2024-11-27 04:35:49.897057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.285 [2024-11-27 04:35:49.897474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:02.285 [2024-11-27 04:35:49.897520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:02.285 [2024-11-27 04:35:49.898114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.285 [2024-11-27 04:35:49.898470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:02.285 [2024-11-27 04:35:49.898511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:02.285 [2024-11-27 04:35:49.898863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.285 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.544 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.544 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:02.544 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.544 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.544 [ 00:14:02.544 { 00:14:02.544 "name": "BaseBdev3", 00:14:02.544 "aliases": [ 00:14:02.544 "824dc081-3678-4aa4-92fa-981977ecff26" 00:14:02.544 ], 00:14:02.544 "product_name": "Malloc disk", 00:14:02.544 "block_size": 512, 00:14:02.544 "num_blocks": 65536, 00:14:02.544 "uuid": "824dc081-3678-4aa4-92fa-981977ecff26", 00:14:02.544 "assigned_rate_limits": { 00:14:02.544 "rw_ios_per_sec": 0, 00:14:02.544 "rw_mbytes_per_sec": 0, 00:14:02.544 "r_mbytes_per_sec": 0, 00:14:02.544 "w_mbytes_per_sec": 0 00:14:02.544 }, 00:14:02.544 "claimed": true, 00:14:02.544 "claim_type": "exclusive_write", 00:14:02.545 "zoned": false, 00:14:02.545 "supported_io_types": { 00:14:02.545 "read": true, 00:14:02.545 "write": true, 00:14:02.545 "unmap": true, 00:14:02.545 "flush": true, 00:14:02.545 "reset": true, 00:14:02.545 "nvme_admin": false, 00:14:02.545 "nvme_io": false, 00:14:02.545 "nvme_io_md": false, 00:14:02.545 "write_zeroes": true, 00:14:02.545 "zcopy": true, 00:14:02.545 "get_zone_info": false, 00:14:02.545 "zone_management": false, 00:14:02.545 "zone_append": false, 00:14:02.545 "compare": false, 00:14:02.545 "compare_and_write": false, 00:14:02.545 "abort": true, 00:14:02.545 "seek_hole": false, 00:14:02.545 "seek_data": false, 
00:14:02.545 "copy": true, 00:14:02.545 "nvme_iov_md": false 00:14:02.545 }, 00:14:02.545 "memory_domains": [ 00:14:02.545 { 00:14:02.545 "dma_device_id": "system", 00:14:02.545 "dma_device_type": 1 00:14:02.545 }, 00:14:02.545 { 00:14:02.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.545 "dma_device_type": 2 00:14:02.545 } 00:14:02.545 ], 00:14:02.545 "driver_specific": {} 00:14:02.545 } 00:14:02.545 ] 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.545 "name": "Existed_Raid", 00:14:02.545 "uuid": "0b3c819c-9049-46b9-a80f-9fe5b7beb127", 00:14:02.545 "strip_size_kb": 64, 00:14:02.545 "state": "online", 00:14:02.545 "raid_level": "concat", 00:14:02.545 "superblock": true, 00:14:02.545 "num_base_bdevs": 3, 00:14:02.545 "num_base_bdevs_discovered": 3, 00:14:02.545 "num_base_bdevs_operational": 3, 00:14:02.545 "base_bdevs_list": [ 00:14:02.545 { 00:14:02.545 "name": "BaseBdev1", 00:14:02.545 "uuid": "03e5ae0f-6eec-43c2-beff-54f371d1e8e8", 00:14:02.545 "is_configured": true, 00:14:02.545 "data_offset": 2048, 00:14:02.545 "data_size": 63488 00:14:02.545 }, 00:14:02.545 { 00:14:02.545 "name": "BaseBdev2", 00:14:02.545 "uuid": "4cf1d31d-fb95-4013-9aee-9d41572a8da5", 00:14:02.545 "is_configured": true, 00:14:02.545 "data_offset": 2048, 00:14:02.545 "data_size": 63488 00:14:02.545 }, 00:14:02.545 { 00:14:02.545 "name": "BaseBdev3", 00:14:02.545 "uuid": "824dc081-3678-4aa4-92fa-981977ecff26", 00:14:02.545 "is_configured": true, 00:14:02.545 "data_offset": 2048, 00:14:02.545 "data_size": 63488 00:14:02.545 } 00:14:02.545 ] 00:14:02.545 }' 00:14:02.545 04:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.545 04:35:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.112 [2024-11-27 04:35:50.441699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.112 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.112 "name": "Existed_Raid", 00:14:03.112 "aliases": [ 00:14:03.112 "0b3c819c-9049-46b9-a80f-9fe5b7beb127" 00:14:03.112 ], 00:14:03.112 "product_name": "Raid Volume", 00:14:03.112 "block_size": 512, 00:14:03.112 "num_blocks": 190464, 00:14:03.112 "uuid": "0b3c819c-9049-46b9-a80f-9fe5b7beb127", 00:14:03.112 "assigned_rate_limits": { 00:14:03.112 "rw_ios_per_sec": 0, 00:14:03.112 "rw_mbytes_per_sec": 0, 00:14:03.112 
"r_mbytes_per_sec": 0, 00:14:03.112 "w_mbytes_per_sec": 0 00:14:03.112 }, 00:14:03.112 "claimed": false, 00:14:03.112 "zoned": false, 00:14:03.112 "supported_io_types": { 00:14:03.112 "read": true, 00:14:03.112 "write": true, 00:14:03.112 "unmap": true, 00:14:03.112 "flush": true, 00:14:03.112 "reset": true, 00:14:03.112 "nvme_admin": false, 00:14:03.112 "nvme_io": false, 00:14:03.112 "nvme_io_md": false, 00:14:03.112 "write_zeroes": true, 00:14:03.112 "zcopy": false, 00:14:03.112 "get_zone_info": false, 00:14:03.112 "zone_management": false, 00:14:03.112 "zone_append": false, 00:14:03.112 "compare": false, 00:14:03.112 "compare_and_write": false, 00:14:03.112 "abort": false, 00:14:03.112 "seek_hole": false, 00:14:03.112 "seek_data": false, 00:14:03.112 "copy": false, 00:14:03.112 "nvme_iov_md": false 00:14:03.112 }, 00:14:03.112 "memory_domains": [ 00:14:03.112 { 00:14:03.112 "dma_device_id": "system", 00:14:03.112 "dma_device_type": 1 00:14:03.112 }, 00:14:03.112 { 00:14:03.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.112 "dma_device_type": 2 00:14:03.112 }, 00:14:03.112 { 00:14:03.112 "dma_device_id": "system", 00:14:03.112 "dma_device_type": 1 00:14:03.112 }, 00:14:03.112 { 00:14:03.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.112 "dma_device_type": 2 00:14:03.112 }, 00:14:03.112 { 00:14:03.112 "dma_device_id": "system", 00:14:03.112 "dma_device_type": 1 00:14:03.112 }, 00:14:03.112 { 00:14:03.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.112 "dma_device_type": 2 00:14:03.112 } 00:14:03.112 ], 00:14:03.112 "driver_specific": { 00:14:03.112 "raid": { 00:14:03.112 "uuid": "0b3c819c-9049-46b9-a80f-9fe5b7beb127", 00:14:03.112 "strip_size_kb": 64, 00:14:03.112 "state": "online", 00:14:03.112 "raid_level": "concat", 00:14:03.112 "superblock": true, 00:14:03.112 "num_base_bdevs": 3, 00:14:03.112 "num_base_bdevs_discovered": 3, 00:14:03.112 "num_base_bdevs_operational": 3, 00:14:03.112 "base_bdevs_list": [ 00:14:03.112 { 00:14:03.112 
"name": "BaseBdev1", 00:14:03.112 "uuid": "03e5ae0f-6eec-43c2-beff-54f371d1e8e8", 00:14:03.112 "is_configured": true, 00:14:03.112 "data_offset": 2048, 00:14:03.112 "data_size": 63488 00:14:03.112 }, 00:14:03.112 { 00:14:03.112 "name": "BaseBdev2", 00:14:03.113 "uuid": "4cf1d31d-fb95-4013-9aee-9d41572a8da5", 00:14:03.113 "is_configured": true, 00:14:03.113 "data_offset": 2048, 00:14:03.113 "data_size": 63488 00:14:03.113 }, 00:14:03.113 { 00:14:03.113 "name": "BaseBdev3", 00:14:03.113 "uuid": "824dc081-3678-4aa4-92fa-981977ecff26", 00:14:03.113 "is_configured": true, 00:14:03.113 "data_offset": 2048, 00:14:03.113 "data_size": 63488 00:14:03.113 } 00:14:03.113 ] 00:14:03.113 } 00:14:03.113 } 00:14:03.113 }' 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:03.113 BaseBdev2 00:14:03.113 BaseBdev3' 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.113 04:35:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.113 04:35:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.371 [2024-11-27 04:35:50.749459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:03.371 [2024-11-27 04:35:50.749501] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.371 [2024-11-27 04:35:50.749580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.371 "name": "Existed_Raid", 00:14:03.371 "uuid": "0b3c819c-9049-46b9-a80f-9fe5b7beb127", 00:14:03.371 "strip_size_kb": 64, 00:14:03.371 "state": "offline", 00:14:03.371 "raid_level": "concat", 00:14:03.371 "superblock": true, 00:14:03.371 "num_base_bdevs": 3, 00:14:03.371 "num_base_bdevs_discovered": 2, 00:14:03.371 "num_base_bdevs_operational": 2, 00:14:03.371 "base_bdevs_list": [ 00:14:03.371 { 00:14:03.371 "name": null, 00:14:03.371 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:03.371 "is_configured": false, 00:14:03.371 "data_offset": 0, 00:14:03.371 "data_size": 63488 00:14:03.371 }, 00:14:03.371 { 00:14:03.371 "name": "BaseBdev2", 00:14:03.371 "uuid": "4cf1d31d-fb95-4013-9aee-9d41572a8da5", 00:14:03.371 "is_configured": true, 00:14:03.371 "data_offset": 2048, 00:14:03.371 "data_size": 63488 00:14:03.371 }, 00:14:03.371 { 00:14:03.371 "name": "BaseBdev3", 00:14:03.371 "uuid": "824dc081-3678-4aa4-92fa-981977ecff26", 00:14:03.371 "is_configured": true, 00:14:03.371 "data_offset": 2048, 00:14:03.371 "data_size": 63488 00:14:03.371 } 00:14:03.371 ] 00:14:03.371 }' 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.371 04:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.938 [2024-11-27 04:35:51.412607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.938 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.196 [2024-11-27 04:35:51.559182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:04.196 [2024-11-27 04:35:51.559253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.196 BaseBdev2 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.196 
04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.196 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.196 [ 00:14:04.196 { 00:14:04.196 "name": "BaseBdev2", 00:14:04.196 "aliases": [ 00:14:04.196 "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9" 00:14:04.196 ], 00:14:04.196 "product_name": "Malloc disk", 00:14:04.196 "block_size": 512, 00:14:04.196 "num_blocks": 65536, 00:14:04.196 "uuid": "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9", 00:14:04.196 "assigned_rate_limits": { 00:14:04.196 "rw_ios_per_sec": 0, 00:14:04.196 "rw_mbytes_per_sec": 0, 00:14:04.196 "r_mbytes_per_sec": 0, 00:14:04.196 "w_mbytes_per_sec": 0 
00:14:04.196 }, 00:14:04.196 "claimed": false, 00:14:04.196 "zoned": false, 00:14:04.196 "supported_io_types": { 00:14:04.196 "read": true, 00:14:04.196 "write": true, 00:14:04.196 "unmap": true, 00:14:04.196 "flush": true, 00:14:04.196 "reset": true, 00:14:04.196 "nvme_admin": false, 00:14:04.196 "nvme_io": false, 00:14:04.196 "nvme_io_md": false, 00:14:04.197 "write_zeroes": true, 00:14:04.197 "zcopy": true, 00:14:04.197 "get_zone_info": false, 00:14:04.197 "zone_management": false, 00:14:04.197 "zone_append": false, 00:14:04.197 "compare": false, 00:14:04.197 "compare_and_write": false, 00:14:04.197 "abort": true, 00:14:04.197 "seek_hole": false, 00:14:04.197 "seek_data": false, 00:14:04.197 "copy": true, 00:14:04.197 "nvme_iov_md": false 00:14:04.197 }, 00:14:04.197 "memory_domains": [ 00:14:04.197 { 00:14:04.197 "dma_device_id": "system", 00:14:04.197 "dma_device_type": 1 00:14:04.197 }, 00:14:04.197 { 00:14:04.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.197 "dma_device_type": 2 00:14:04.197 } 00:14:04.197 ], 00:14:04.197 "driver_specific": {} 00:14:04.197 } 00:14:04.197 ] 00:14:04.197 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.197 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:04.197 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:04.197 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:04.197 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:04.197 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.197 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.455 BaseBdev3 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.455 [ 00:14:04.455 { 00:14:04.455 "name": "BaseBdev3", 00:14:04.455 "aliases": [ 00:14:04.455 "f096a804-42d7-413b-aa6e-a240b7bf97d4" 00:14:04.455 ], 00:14:04.455 "product_name": "Malloc disk", 00:14:04.455 "block_size": 512, 00:14:04.455 "num_blocks": 65536, 00:14:04.455 "uuid": "f096a804-42d7-413b-aa6e-a240b7bf97d4", 00:14:04.455 "assigned_rate_limits": { 00:14:04.455 "rw_ios_per_sec": 0, 00:14:04.455 "rw_mbytes_per_sec": 0, 
00:14:04.455 "r_mbytes_per_sec": 0, 00:14:04.455 "w_mbytes_per_sec": 0 00:14:04.455 }, 00:14:04.455 "claimed": false, 00:14:04.455 "zoned": false, 00:14:04.455 "supported_io_types": { 00:14:04.455 "read": true, 00:14:04.455 "write": true, 00:14:04.455 "unmap": true, 00:14:04.455 "flush": true, 00:14:04.455 "reset": true, 00:14:04.455 "nvme_admin": false, 00:14:04.455 "nvme_io": false, 00:14:04.455 "nvme_io_md": false, 00:14:04.455 "write_zeroes": true, 00:14:04.455 "zcopy": true, 00:14:04.455 "get_zone_info": false, 00:14:04.455 "zone_management": false, 00:14:04.455 "zone_append": false, 00:14:04.455 "compare": false, 00:14:04.455 "compare_and_write": false, 00:14:04.455 "abort": true, 00:14:04.455 "seek_hole": false, 00:14:04.455 "seek_data": false, 00:14:04.455 "copy": true, 00:14:04.455 "nvme_iov_md": false 00:14:04.455 }, 00:14:04.455 "memory_domains": [ 00:14:04.455 { 00:14:04.455 "dma_device_id": "system", 00:14:04.455 "dma_device_type": 1 00:14:04.455 }, 00:14:04.455 { 00:14:04.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.455 "dma_device_type": 2 00:14:04.455 } 00:14:04.455 ], 00:14:04.455 "driver_specific": {} 00:14:04.455 } 00:14:04.455 ] 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.455 04:35:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.455 [2024-11-27 04:35:51.861661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:04.456 [2024-11-27 04:35:51.861719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:04.456 [2024-11-27 04:35:51.861751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.456 [2024-11-27 04:35:51.864130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.456 04:35:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.456 "name": "Existed_Raid", 00:14:04.456 "uuid": "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa", 00:14:04.456 "strip_size_kb": 64, 00:14:04.456 "state": "configuring", 00:14:04.456 "raid_level": "concat", 00:14:04.456 "superblock": true, 00:14:04.456 "num_base_bdevs": 3, 00:14:04.456 "num_base_bdevs_discovered": 2, 00:14:04.456 "num_base_bdevs_operational": 3, 00:14:04.456 "base_bdevs_list": [ 00:14:04.456 { 00:14:04.456 "name": "BaseBdev1", 00:14:04.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.456 "is_configured": false, 00:14:04.456 "data_offset": 0, 00:14:04.456 "data_size": 0 00:14:04.456 }, 00:14:04.456 { 00:14:04.456 "name": "BaseBdev2", 00:14:04.456 "uuid": "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9", 00:14:04.456 "is_configured": true, 00:14:04.456 "data_offset": 2048, 00:14:04.456 "data_size": 63488 00:14:04.456 }, 00:14:04.456 { 00:14:04.456 "name": "BaseBdev3", 00:14:04.456 "uuid": "f096a804-42d7-413b-aa6e-a240b7bf97d4", 00:14:04.456 "is_configured": true, 00:14:04.456 "data_offset": 2048, 00:14:04.456 "data_size": 63488 00:14:04.456 } 00:14:04.456 ] 00:14:04.456 }' 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.456 04:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.021 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.022 [2024-11-27 04:35:52.401872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.022 04:35:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.022 "name": "Existed_Raid", 00:14:05.022 "uuid": "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa", 00:14:05.022 "strip_size_kb": 64, 00:14:05.022 "state": "configuring", 00:14:05.022 "raid_level": "concat", 00:14:05.022 "superblock": true, 00:14:05.022 "num_base_bdevs": 3, 00:14:05.022 "num_base_bdevs_discovered": 1, 00:14:05.022 "num_base_bdevs_operational": 3, 00:14:05.022 "base_bdevs_list": [ 00:14:05.022 { 00:14:05.022 "name": "BaseBdev1", 00:14:05.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.022 "is_configured": false, 00:14:05.022 "data_offset": 0, 00:14:05.022 "data_size": 0 00:14:05.022 }, 00:14:05.022 { 00:14:05.022 "name": null, 00:14:05.022 "uuid": "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9", 00:14:05.022 "is_configured": false, 00:14:05.022 "data_offset": 0, 00:14:05.022 "data_size": 63488 00:14:05.022 }, 00:14:05.022 { 00:14:05.022 "name": "BaseBdev3", 00:14:05.022 "uuid": "f096a804-42d7-413b-aa6e-a240b7bf97d4", 00:14:05.022 "is_configured": true, 00:14:05.022 "data_offset": 2048, 00:14:05.022 "data_size": 63488 00:14:05.022 } 00:14:05.022 ] 00:14:05.022 }' 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.022 04:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.588 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.588 04:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:05.588 04:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:05.588 04:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.588 04:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.588 [2024-11-27 04:35:53.056706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.588 BaseBdev1 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.588 [ 00:14:05.588 { 00:14:05.588 "name": "BaseBdev1", 00:14:05.588 "aliases": [ 00:14:05.588 "b2e61886-7b02-401d-b723-9c9f416e6492" 00:14:05.588 ], 00:14:05.588 "product_name": "Malloc disk", 00:14:05.588 "block_size": 512, 00:14:05.588 "num_blocks": 65536, 00:14:05.588 "uuid": "b2e61886-7b02-401d-b723-9c9f416e6492", 00:14:05.588 "assigned_rate_limits": { 00:14:05.588 "rw_ios_per_sec": 0, 00:14:05.588 "rw_mbytes_per_sec": 0, 00:14:05.588 "r_mbytes_per_sec": 0, 00:14:05.588 "w_mbytes_per_sec": 0 00:14:05.588 }, 00:14:05.588 "claimed": true, 00:14:05.588 "claim_type": "exclusive_write", 00:14:05.588 "zoned": false, 00:14:05.588 "supported_io_types": { 00:14:05.588 "read": true, 00:14:05.588 "write": true, 00:14:05.588 "unmap": true, 00:14:05.588 "flush": true, 00:14:05.588 "reset": true, 00:14:05.588 "nvme_admin": false, 00:14:05.588 "nvme_io": false, 00:14:05.588 "nvme_io_md": false, 00:14:05.588 "write_zeroes": true, 00:14:05.588 "zcopy": true, 00:14:05.588 "get_zone_info": false, 00:14:05.588 "zone_management": false, 00:14:05.588 "zone_append": false, 00:14:05.588 "compare": false, 00:14:05.588 "compare_and_write": false, 00:14:05.588 "abort": true, 00:14:05.588 "seek_hole": false, 00:14:05.588 "seek_data": false, 00:14:05.588 "copy": true, 00:14:05.588 "nvme_iov_md": false 00:14:05.588 }, 00:14:05.588 "memory_domains": [ 00:14:05.588 { 00:14:05.588 "dma_device_id": "system", 00:14:05.588 "dma_device_type": 1 00:14:05.588 }, 00:14:05.588 { 00:14:05.588 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:05.588 "dma_device_type": 2 00:14:05.588 } 00:14:05.588 ], 00:14:05.588 "driver_specific": {} 00:14:05.588 } 00:14:05.588 ] 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.588 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.588 "name": "Existed_Raid", 00:14:05.588 "uuid": "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa", 00:14:05.589 "strip_size_kb": 64, 00:14:05.589 "state": "configuring", 00:14:05.589 "raid_level": "concat", 00:14:05.589 "superblock": true, 00:14:05.589 "num_base_bdevs": 3, 00:14:05.589 "num_base_bdevs_discovered": 2, 00:14:05.589 "num_base_bdevs_operational": 3, 00:14:05.589 "base_bdevs_list": [ 00:14:05.589 { 00:14:05.589 "name": "BaseBdev1", 00:14:05.589 "uuid": "b2e61886-7b02-401d-b723-9c9f416e6492", 00:14:05.589 "is_configured": true, 00:14:05.589 "data_offset": 2048, 00:14:05.589 "data_size": 63488 00:14:05.589 }, 00:14:05.589 { 00:14:05.589 "name": null, 00:14:05.589 "uuid": "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9", 00:14:05.589 "is_configured": false, 00:14:05.589 "data_offset": 0, 00:14:05.589 "data_size": 63488 00:14:05.589 }, 00:14:05.589 { 00:14:05.589 "name": "BaseBdev3", 00:14:05.589 "uuid": "f096a804-42d7-413b-aa6e-a240b7bf97d4", 00:14:05.589 "is_configured": true, 00:14:05.589 "data_offset": 2048, 00:14:05.589 "data_size": 63488 00:14:05.589 } 00:14:05.589 ] 00:14:05.589 }' 00:14:05.589 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.589 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.156 [2024-11-27 04:35:53.688908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.156 "name": "Existed_Raid", 00:14:06.156 "uuid": "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa", 00:14:06.156 "strip_size_kb": 64, 00:14:06.156 "state": "configuring", 00:14:06.156 "raid_level": "concat", 00:14:06.156 "superblock": true, 00:14:06.156 "num_base_bdevs": 3, 00:14:06.156 "num_base_bdevs_discovered": 1, 00:14:06.156 "num_base_bdevs_operational": 3, 00:14:06.156 "base_bdevs_list": [ 00:14:06.156 { 00:14:06.156 "name": "BaseBdev1", 00:14:06.156 "uuid": "b2e61886-7b02-401d-b723-9c9f416e6492", 00:14:06.156 "is_configured": true, 00:14:06.156 "data_offset": 2048, 00:14:06.156 "data_size": 63488 00:14:06.156 }, 00:14:06.156 { 00:14:06.156 "name": null, 00:14:06.156 "uuid": "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9", 00:14:06.156 "is_configured": false, 00:14:06.156 "data_offset": 0, 00:14:06.156 "data_size": 63488 00:14:06.156 }, 00:14:06.156 { 00:14:06.156 "name": null, 00:14:06.156 "uuid": "f096a804-42d7-413b-aa6e-a240b7bf97d4", 00:14:06.156 "is_configured": false, 00:14:06.156 "data_offset": 0, 00:14:06.156 "data_size": 63488 00:14:06.156 } 00:14:06.156 ] 00:14:06.156 }' 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.156 04:35:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.723 [2024-11-27 04:35:54.249098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.723 04:35:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.723 "name": "Existed_Raid", 00:14:06.723 "uuid": "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa", 00:14:06.723 "strip_size_kb": 64, 00:14:06.723 "state": "configuring", 00:14:06.723 "raid_level": "concat", 00:14:06.723 "superblock": true, 00:14:06.723 "num_base_bdevs": 3, 00:14:06.723 "num_base_bdevs_discovered": 2, 00:14:06.723 "num_base_bdevs_operational": 3, 00:14:06.723 "base_bdevs_list": [ 00:14:06.723 { 00:14:06.723 "name": "BaseBdev1", 00:14:06.723 "uuid": "b2e61886-7b02-401d-b723-9c9f416e6492", 00:14:06.723 "is_configured": true, 00:14:06.723 "data_offset": 2048, 00:14:06.723 "data_size": 63488 00:14:06.723 }, 00:14:06.723 { 00:14:06.723 "name": null, 00:14:06.723 "uuid": "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9", 00:14:06.723 "is_configured": 
false, 00:14:06.723 "data_offset": 0, 00:14:06.723 "data_size": 63488 00:14:06.723 }, 00:14:06.723 { 00:14:06.723 "name": "BaseBdev3", 00:14:06.723 "uuid": "f096a804-42d7-413b-aa6e-a240b7bf97d4", 00:14:06.723 "is_configured": true, 00:14:06.723 "data_offset": 2048, 00:14:06.723 "data_size": 63488 00:14:06.723 } 00:14:06.723 ] 00:14:06.723 }' 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.723 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.289 [2024-11-27 04:35:54.817298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:07.289 04:35:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.289 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.548 "name": "Existed_Raid", 00:14:07.548 "uuid": "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa", 00:14:07.548 "strip_size_kb": 64, 00:14:07.548 "state": "configuring", 00:14:07.548 "raid_level": "concat", 00:14:07.548 "superblock": true, 00:14:07.548 "num_base_bdevs": 3, 00:14:07.548 
"num_base_bdevs_discovered": 1, 00:14:07.548 "num_base_bdevs_operational": 3, 00:14:07.548 "base_bdevs_list": [ 00:14:07.548 { 00:14:07.548 "name": null, 00:14:07.548 "uuid": "b2e61886-7b02-401d-b723-9c9f416e6492", 00:14:07.548 "is_configured": false, 00:14:07.548 "data_offset": 0, 00:14:07.548 "data_size": 63488 00:14:07.548 }, 00:14:07.548 { 00:14:07.548 "name": null, 00:14:07.548 "uuid": "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9", 00:14:07.548 "is_configured": false, 00:14:07.548 "data_offset": 0, 00:14:07.548 "data_size": 63488 00:14:07.548 }, 00:14:07.548 { 00:14:07.548 "name": "BaseBdev3", 00:14:07.548 "uuid": "f096a804-42d7-413b-aa6e-a240b7bf97d4", 00:14:07.548 "is_configured": true, 00:14:07.548 "data_offset": 2048, 00:14:07.548 "data_size": 63488 00:14:07.548 } 00:14:07.548 ] 00:14:07.548 }' 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.548 04:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.114 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.114 04:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.114 04:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.114 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:08.114 04:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.114 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:08.114 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.115 04:35:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.115 [2024-11-27 04:35:55.505326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.115 
04:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.115 "name": "Existed_Raid", 00:14:08.115 "uuid": "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa", 00:14:08.115 "strip_size_kb": 64, 00:14:08.115 "state": "configuring", 00:14:08.115 "raid_level": "concat", 00:14:08.115 "superblock": true, 00:14:08.115 "num_base_bdevs": 3, 00:14:08.115 "num_base_bdevs_discovered": 2, 00:14:08.115 "num_base_bdevs_operational": 3, 00:14:08.115 "base_bdevs_list": [ 00:14:08.115 { 00:14:08.115 "name": null, 00:14:08.115 "uuid": "b2e61886-7b02-401d-b723-9c9f416e6492", 00:14:08.115 "is_configured": false, 00:14:08.115 "data_offset": 0, 00:14:08.115 "data_size": 63488 00:14:08.115 }, 00:14:08.115 { 00:14:08.115 "name": "BaseBdev2", 00:14:08.115 "uuid": "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9", 00:14:08.115 "is_configured": true, 00:14:08.115 "data_offset": 2048, 00:14:08.115 "data_size": 63488 00:14:08.115 }, 00:14:08.115 { 00:14:08.115 "name": "BaseBdev3", 00:14:08.115 "uuid": "f096a804-42d7-413b-aa6e-a240b7bf97d4", 00:14:08.115 "is_configured": true, 00:14:08.115 "data_offset": 2048, 00:14:08.115 "data_size": 63488 00:14:08.115 } 00:14:08.115 ] 00:14:08.115 }' 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.115 04:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
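The `verify_raid_bdev_state` trace above pulls the `Existed_Raid` entry out of `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and then compares the state, level, strip size, and bdev counts against the expected values. A minimal Python sketch of the same check, using field values copied from the `raid_bdev_info` dump just above (the Python is illustrative only; the real helper is the shell function in bdev_raid.sh):

```python
import json

# Trimmed copy of the raid_bdev_info JSON dumped above (base_bdevs_list omitted).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the comparisons the shell helper performs on these fields.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# The raid stays "configuring" here: only 2 of 3 base bdevs are discovered so far.
verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 3)
print(raid_bdev_info["num_base_bdevs_discovered"])
```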
00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b2e61886-7b02-401d-b723-9c9f416e6492 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.681 [2024-11-27 04:35:56.176991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:08.681 [2024-11-27 04:35:56.177288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:08.681 [2024-11-27 04:35:56.177314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:08.681 [2024-11-27 04:35:56.177622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:08.681 NewBaseBdev 00:14:08.681 [2024-11-27 04:35:56.177841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:08.681 [2024-11-27 04:35:56.177859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:14:08.681 [2024-11-27 04:35:56.178043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.681 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.682 [ 00:14:08.682 { 00:14:08.682 "name": "NewBaseBdev", 00:14:08.682 "aliases": [ 00:14:08.682 "b2e61886-7b02-401d-b723-9c9f416e6492" 00:14:08.682 ], 00:14:08.682 "product_name": "Malloc disk", 00:14:08.682 "block_size": 512, 
00:14:08.682 "num_blocks": 65536, 00:14:08.682 "uuid": "b2e61886-7b02-401d-b723-9c9f416e6492", 00:14:08.682 "assigned_rate_limits": { 00:14:08.682 "rw_ios_per_sec": 0, 00:14:08.682 "rw_mbytes_per_sec": 0, 00:14:08.682 "r_mbytes_per_sec": 0, 00:14:08.682 "w_mbytes_per_sec": 0 00:14:08.682 }, 00:14:08.682 "claimed": true, 00:14:08.682 "claim_type": "exclusive_write", 00:14:08.682 "zoned": false, 00:14:08.682 "supported_io_types": { 00:14:08.682 "read": true, 00:14:08.682 "write": true, 00:14:08.682 "unmap": true, 00:14:08.682 "flush": true, 00:14:08.682 "reset": true, 00:14:08.682 "nvme_admin": false, 00:14:08.682 "nvme_io": false, 00:14:08.682 "nvme_io_md": false, 00:14:08.682 "write_zeroes": true, 00:14:08.682 "zcopy": true, 00:14:08.682 "get_zone_info": false, 00:14:08.682 "zone_management": false, 00:14:08.682 "zone_append": false, 00:14:08.682 "compare": false, 00:14:08.682 "compare_and_write": false, 00:14:08.682 "abort": true, 00:14:08.682 "seek_hole": false, 00:14:08.682 "seek_data": false, 00:14:08.682 "copy": true, 00:14:08.682 "nvme_iov_md": false 00:14:08.682 }, 00:14:08.682 "memory_domains": [ 00:14:08.682 { 00:14:08.682 "dma_device_id": "system", 00:14:08.682 "dma_device_type": 1 00:14:08.682 }, 00:14:08.682 { 00:14:08.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.682 "dma_device_type": 2 00:14:08.682 } 00:14:08.682 ], 00:14:08.682 "driver_specific": {} 00:14:08.682 } 00:14:08.682 ] 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.682 "name": "Existed_Raid", 00:14:08.682 "uuid": "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa", 00:14:08.682 "strip_size_kb": 64, 00:14:08.682 "state": "online", 00:14:08.682 "raid_level": "concat", 00:14:08.682 "superblock": true, 00:14:08.682 "num_base_bdevs": 3, 00:14:08.682 "num_base_bdevs_discovered": 3, 00:14:08.682 "num_base_bdevs_operational": 3, 00:14:08.682 "base_bdevs_list": [ 00:14:08.682 { 00:14:08.682 "name": "NewBaseBdev", 00:14:08.682 "uuid": 
"b2e61886-7b02-401d-b723-9c9f416e6492", 00:14:08.682 "is_configured": true, 00:14:08.682 "data_offset": 2048, 00:14:08.682 "data_size": 63488 00:14:08.682 }, 00:14:08.682 { 00:14:08.682 "name": "BaseBdev2", 00:14:08.682 "uuid": "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9", 00:14:08.682 "is_configured": true, 00:14:08.682 "data_offset": 2048, 00:14:08.682 "data_size": 63488 00:14:08.682 }, 00:14:08.682 { 00:14:08.682 "name": "BaseBdev3", 00:14:08.682 "uuid": "f096a804-42d7-413b-aa6e-a240b7bf97d4", 00:14:08.682 "is_configured": true, 00:14:08.682 "data_offset": 2048, 00:14:08.682 "data_size": 63488 00:14:08.682 } 00:14:08.682 ] 00:14:08.682 }' 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.682 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:14:09.253 [2024-11-27 04:35:56.725570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.253 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:09.253 "name": "Existed_Raid", 00:14:09.253 "aliases": [ 00:14:09.253 "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa" 00:14:09.253 ], 00:14:09.253 "product_name": "Raid Volume", 00:14:09.253 "block_size": 512, 00:14:09.253 "num_blocks": 190464, 00:14:09.253 "uuid": "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa", 00:14:09.253 "assigned_rate_limits": { 00:14:09.253 "rw_ios_per_sec": 0, 00:14:09.253 "rw_mbytes_per_sec": 0, 00:14:09.253 "r_mbytes_per_sec": 0, 00:14:09.253 "w_mbytes_per_sec": 0 00:14:09.253 }, 00:14:09.253 "claimed": false, 00:14:09.253 "zoned": false, 00:14:09.253 "supported_io_types": { 00:14:09.253 "read": true, 00:14:09.253 "write": true, 00:14:09.253 "unmap": true, 00:14:09.253 "flush": true, 00:14:09.253 "reset": true, 00:14:09.253 "nvme_admin": false, 00:14:09.253 "nvme_io": false, 00:14:09.253 "nvme_io_md": false, 00:14:09.253 "write_zeroes": true, 00:14:09.253 "zcopy": false, 00:14:09.254 "get_zone_info": false, 00:14:09.254 "zone_management": false, 00:14:09.254 "zone_append": false, 00:14:09.254 "compare": false, 00:14:09.254 "compare_and_write": false, 00:14:09.254 "abort": false, 00:14:09.254 "seek_hole": false, 00:14:09.254 "seek_data": false, 00:14:09.254 "copy": false, 00:14:09.254 "nvme_iov_md": false 00:14:09.254 }, 00:14:09.254 "memory_domains": [ 00:14:09.254 { 00:14:09.254 "dma_device_id": "system", 00:14:09.254 "dma_device_type": 1 00:14:09.254 }, 00:14:09.254 { 00:14:09.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.254 "dma_device_type": 2 00:14:09.254 }, 00:14:09.254 { 00:14:09.254 "dma_device_id": "system", 00:14:09.254 "dma_device_type": 1 00:14:09.254 }, 00:14:09.254 { 00:14:09.254 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.254 "dma_device_type": 2 00:14:09.254 }, 00:14:09.254 { 00:14:09.254 "dma_device_id": "system", 00:14:09.254 "dma_device_type": 1 00:14:09.254 }, 00:14:09.254 { 00:14:09.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.254 "dma_device_type": 2 00:14:09.254 } 00:14:09.254 ], 00:14:09.254 "driver_specific": { 00:14:09.254 "raid": { 00:14:09.254 "uuid": "7c7b0945-bcdc-49bc-9a31-2e4578dc08aa", 00:14:09.254 "strip_size_kb": 64, 00:14:09.254 "state": "online", 00:14:09.254 "raid_level": "concat", 00:14:09.254 "superblock": true, 00:14:09.254 "num_base_bdevs": 3, 00:14:09.254 "num_base_bdevs_discovered": 3, 00:14:09.254 "num_base_bdevs_operational": 3, 00:14:09.254 "base_bdevs_list": [ 00:14:09.254 { 00:14:09.254 "name": "NewBaseBdev", 00:14:09.254 "uuid": "b2e61886-7b02-401d-b723-9c9f416e6492", 00:14:09.254 "is_configured": true, 00:14:09.254 "data_offset": 2048, 00:14:09.254 "data_size": 63488 00:14:09.254 }, 00:14:09.254 { 00:14:09.254 "name": "BaseBdev2", 00:14:09.254 "uuid": "f1cae4e2-9bc9-42ff-9bc7-d4f899e99fe9", 00:14:09.254 "is_configured": true, 00:14:09.254 "data_offset": 2048, 00:14:09.254 "data_size": 63488 00:14:09.254 }, 00:14:09.254 { 00:14:09.254 "name": "BaseBdev3", 00:14:09.254 "uuid": "f096a804-42d7-413b-aa6e-a240b7bf97d4", 00:14:09.254 "is_configured": true, 00:14:09.254 "data_offset": 2048, 00:14:09.254 "data_size": 63488 00:14:09.254 } 00:14:09.254 ] 00:14:09.254 } 00:14:09.254 } 00:14:09.254 }' 00:14:09.254 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:09.254 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:09.254 BaseBdev2 00:14:09.254 BaseBdev3' 00:14:09.254 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:14:09.254 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:09.254 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:09.254 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:09.254 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:09.254 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.527 04:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.527 04:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:09.527 04:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:09.527 04:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:09.527 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.527 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.527 [2024-11-27 04:35:57.025284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.527 [2024-11-27 04:35:57.025321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.527 [2024-11-27 04:35:57.025435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.527 [2024-11-27 04:35:57.025514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.528 [2024-11-27 04:35:57.025537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66388 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66388 ']' 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66388 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66388 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66388' 00:14:09.528 killing process with pid 66388 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66388 00:14:09.528 [2024-11-27 04:35:57.076264] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.528 04:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66388 00:14:09.786 [2024-11-27 04:35:57.343532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.161 04:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:11.161 00:14:11.161 real 0m11.808s 00:14:11.161 user 0m19.627s 00:14:11.161 sys 0m1.561s 00:14:11.161 ************************************ 00:14:11.161 END TEST raid_state_function_test_sb 
00:14:11.161 ************************************ 00:14:11.161 04:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.161 04:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 04:35:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:14:11.161 04:35:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:11.161 04:35:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.161 04:35:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 ************************************ 00:14:11.161 START TEST raid_superblock_test 00:14:11.161 ************************************ 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:11.161 04:35:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67019 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67019 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67019 ']' 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.161 04:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 [2024-11-27 04:35:58.543082] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:14:11.161 [2024-11-27 04:35:58.543243] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67019 ] 00:14:11.161 [2024-11-27 04:35:58.715301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.419 [2024-11-27 04:35:58.846203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.677 [2024-11-27 04:35:59.053094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.677 [2024-11-27 04:35:59.053361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:11.935 
04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.935 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.194 malloc1 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.194 [2024-11-27 04:35:59.565711] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:12.194 [2024-11-27 04:35:59.565811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.194 [2024-11-27 04:35:59.565850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:12.194 [2024-11-27 04:35:59.565869] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.194 [2024-11-27 04:35:59.568744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.194 [2024-11-27 04:35:59.568800] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:12.194 pt1 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.194 malloc2 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.194 [2024-11-27 04:35:59.614450] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:12.194 [2024-11-27 04:35:59.614520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.194 [2024-11-27 04:35:59.614562] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:12.194 [2024-11-27 04:35:59.614577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.194 [2024-11-27 04:35:59.617400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.194 [2024-11-27 04:35:59.617448] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:12.194 
pt2 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.194 malloc3 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.194 [2024-11-27 04:35:59.689200] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:12.194 [2024-11-27 04:35:59.689267] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.194 [2024-11-27 04:35:59.689302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:12.194 [2024-11-27 04:35:59.689318] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.194 [2024-11-27 04:35:59.692067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.194 [2024-11-27 04:35:59.692250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:12.194 pt3 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:12.194 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.195 [2024-11-27 04:35:59.697273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:12.195 [2024-11-27 04:35:59.699678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:12.195 [2024-11-27 04:35:59.699935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:12.195 [2024-11-27 04:35:59.700154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:12.195 [2024-11-27 04:35:59.700179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:12.195 [2024-11-27 04:35:59.700490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:14:12.195 [2024-11-27 04:35:59.700697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:12.195 [2024-11-27 04:35:59.700713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:12.195 [2024-11-27 04:35:59.700928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.195 "name": "raid_bdev1", 00:14:12.195 "uuid": "bf7a0f9e-e311-4903-91a7-353548f5f598", 00:14:12.195 "strip_size_kb": 64, 00:14:12.195 "state": "online", 00:14:12.195 "raid_level": "concat", 00:14:12.195 "superblock": true, 00:14:12.195 "num_base_bdevs": 3, 00:14:12.195 "num_base_bdevs_discovered": 3, 00:14:12.195 "num_base_bdevs_operational": 3, 00:14:12.195 "base_bdevs_list": [ 00:14:12.195 { 00:14:12.195 "name": "pt1", 00:14:12.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:12.195 "is_configured": true, 00:14:12.195 "data_offset": 2048, 00:14:12.195 "data_size": 63488 00:14:12.195 }, 00:14:12.195 { 00:14:12.195 "name": "pt2", 00:14:12.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:12.195 "is_configured": true, 00:14:12.195 "data_offset": 2048, 00:14:12.195 "data_size": 63488 00:14:12.195 }, 00:14:12.195 { 00:14:12.195 "name": "pt3", 00:14:12.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:12.195 "is_configured": true, 00:14:12.195 "data_offset": 2048, 00:14:12.195 "data_size": 63488 00:14:12.195 } 00:14:12.195 ] 00:14:12.195 }' 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.195 04:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.762 [2024-11-27 04:36:00.225731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:12.762 "name": "raid_bdev1", 00:14:12.762 "aliases": [ 00:14:12.762 "bf7a0f9e-e311-4903-91a7-353548f5f598" 00:14:12.762 ], 00:14:12.762 "product_name": "Raid Volume", 00:14:12.762 "block_size": 512, 00:14:12.762 "num_blocks": 190464, 00:14:12.762 "uuid": "bf7a0f9e-e311-4903-91a7-353548f5f598", 00:14:12.762 "assigned_rate_limits": { 00:14:12.762 "rw_ios_per_sec": 0, 00:14:12.762 "rw_mbytes_per_sec": 0, 00:14:12.762 "r_mbytes_per_sec": 0, 00:14:12.762 "w_mbytes_per_sec": 0 00:14:12.762 }, 00:14:12.762 "claimed": false, 00:14:12.762 "zoned": false, 00:14:12.762 "supported_io_types": { 00:14:12.762 "read": true, 00:14:12.762 "write": true, 00:14:12.762 "unmap": true, 00:14:12.762 "flush": true, 00:14:12.762 "reset": true, 00:14:12.762 "nvme_admin": false, 00:14:12.762 "nvme_io": false, 00:14:12.762 "nvme_io_md": false, 00:14:12.762 "write_zeroes": true, 00:14:12.762 "zcopy": false, 00:14:12.762 "get_zone_info": false, 00:14:12.762 "zone_management": false, 00:14:12.762 "zone_append": false, 00:14:12.762 "compare": 
false, 00:14:12.762 "compare_and_write": false, 00:14:12.762 "abort": false, 00:14:12.762 "seek_hole": false, 00:14:12.762 "seek_data": false, 00:14:12.762 "copy": false, 00:14:12.762 "nvme_iov_md": false 00:14:12.762 }, 00:14:12.762 "memory_domains": [ 00:14:12.762 { 00:14:12.762 "dma_device_id": "system", 00:14:12.762 "dma_device_type": 1 00:14:12.762 }, 00:14:12.762 { 00:14:12.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.762 "dma_device_type": 2 00:14:12.762 }, 00:14:12.762 { 00:14:12.762 "dma_device_id": "system", 00:14:12.762 "dma_device_type": 1 00:14:12.762 }, 00:14:12.762 { 00:14:12.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.762 "dma_device_type": 2 00:14:12.762 }, 00:14:12.762 { 00:14:12.762 "dma_device_id": "system", 00:14:12.762 "dma_device_type": 1 00:14:12.762 }, 00:14:12.762 { 00:14:12.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.762 "dma_device_type": 2 00:14:12.762 } 00:14:12.762 ], 00:14:12.762 "driver_specific": { 00:14:12.762 "raid": { 00:14:12.762 "uuid": "bf7a0f9e-e311-4903-91a7-353548f5f598", 00:14:12.762 "strip_size_kb": 64, 00:14:12.762 "state": "online", 00:14:12.762 "raid_level": "concat", 00:14:12.762 "superblock": true, 00:14:12.762 "num_base_bdevs": 3, 00:14:12.762 "num_base_bdevs_discovered": 3, 00:14:12.762 "num_base_bdevs_operational": 3, 00:14:12.762 "base_bdevs_list": [ 00:14:12.762 { 00:14:12.762 "name": "pt1", 00:14:12.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:12.762 "is_configured": true, 00:14:12.762 "data_offset": 2048, 00:14:12.762 "data_size": 63488 00:14:12.762 }, 00:14:12.762 { 00:14:12.762 "name": "pt2", 00:14:12.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:12.762 "is_configured": true, 00:14:12.762 "data_offset": 2048, 00:14:12.762 "data_size": 63488 00:14:12.762 }, 00:14:12.762 { 00:14:12.762 "name": "pt3", 00:14:12.762 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:12.762 "is_configured": true, 00:14:12.762 "data_offset": 2048, 00:14:12.762 
"data_size": 63488 00:14:12.762 } 00:14:12.762 ] 00:14:12.762 } 00:14:12.762 } 00:14:12.762 }' 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:12.762 pt2 00:14:12.762 pt3' 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.762 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.021 04:36:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.021 [2024-11-27 04:36:00.533750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.021 04:36:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bf7a0f9e-e311-4903-91a7-353548f5f598 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bf7a0f9e-e311-4903-91a7-353548f5f598 ']' 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.021 [2024-11-27 04:36:00.581425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.021 [2024-11-27 04:36:00.581580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.021 [2024-11-27 04:36:00.581815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.021 [2024-11-27 04:36:00.582016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.021 [2024-11-27 04:36:00.582225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:13.021 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.022 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.022 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:13.022 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.022 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.022 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.022 04:36:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:13.022 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:13.022 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:13.022 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:13.022 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.022 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.281 [2024-11-27 04:36:00.753520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:13.281 [2024-11-27 04:36:00.756043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:14:13.281 [2024-11-27 04:36:00.756121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:13.281 [2024-11-27 04:36:00.756200] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:13.281 [2024-11-27 04:36:00.756276] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:13.281 [2024-11-27 04:36:00.756311] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:13.281 [2024-11-27 04:36:00.756340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.281 [2024-11-27 04:36:00.756353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:13.281 request: 00:14:13.281 { 00:14:13.281 "name": "raid_bdev1", 00:14:13.281 "raid_level": "concat", 00:14:13.281 "base_bdevs": [ 00:14:13.281 "malloc1", 00:14:13.281 "malloc2", 00:14:13.281 "malloc3" 00:14:13.281 ], 00:14:13.281 "strip_size_kb": 64, 00:14:13.281 "superblock": false, 00:14:13.281 "method": "bdev_raid_create", 00:14:13.281 "req_id": 1 00:14:13.281 } 00:14:13.281 Got JSON-RPC error response 00:14:13.281 response: 00:14:13.281 { 00:14:13.281 "code": -17, 00:14:13.281 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:13.281 } 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.281 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.282 [2024-11-27 04:36:00.821463] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:13.282 [2024-11-27 04:36:00.821648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.282 [2024-11-27 04:36:00.821728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:13.282 [2024-11-27 04:36:00.821923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.282 [2024-11-27 04:36:00.824824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.282 [2024-11-27 04:36:00.824982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:13.282 [2024-11-27 04:36:00.825199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:13.282 [2024-11-27 04:36:00.825371] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:13.282 pt1 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.282 "name": "raid_bdev1", 
00:14:13.282 "uuid": "bf7a0f9e-e311-4903-91a7-353548f5f598", 00:14:13.282 "strip_size_kb": 64, 00:14:13.282 "state": "configuring", 00:14:13.282 "raid_level": "concat", 00:14:13.282 "superblock": true, 00:14:13.282 "num_base_bdevs": 3, 00:14:13.282 "num_base_bdevs_discovered": 1, 00:14:13.282 "num_base_bdevs_operational": 3, 00:14:13.282 "base_bdevs_list": [ 00:14:13.282 { 00:14:13.282 "name": "pt1", 00:14:13.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:13.282 "is_configured": true, 00:14:13.282 "data_offset": 2048, 00:14:13.282 "data_size": 63488 00:14:13.282 }, 00:14:13.282 { 00:14:13.282 "name": null, 00:14:13.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:13.282 "is_configured": false, 00:14:13.282 "data_offset": 2048, 00:14:13.282 "data_size": 63488 00:14:13.282 }, 00:14:13.282 { 00:14:13.282 "name": null, 00:14:13.282 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:13.282 "is_configured": false, 00:14:13.282 "data_offset": 2048, 00:14:13.282 "data_size": 63488 00:14:13.282 } 00:14:13.282 ] 00:14:13.282 }' 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.282 04:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.848 [2024-11-27 04:36:01.373933] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:13.848 [2024-11-27 04:36:01.374159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.848 [2024-11-27 04:36:01.374246] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:13.848 [2024-11-27 04:36:01.374447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.848 [2024-11-27 04:36:01.375058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.848 [2024-11-27 04:36:01.375209] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:13.848 [2024-11-27 04:36:01.375440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:13.848 [2024-11-27 04:36:01.375605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:13.848 pt2 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.848 [2024-11-27 04:36:01.381896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.848 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.848 "name": "raid_bdev1", 00:14:13.848 "uuid": "bf7a0f9e-e311-4903-91a7-353548f5f598", 00:14:13.848 "strip_size_kb": 64, 00:14:13.848 "state": "configuring", 00:14:13.848 "raid_level": "concat", 00:14:13.848 "superblock": true, 00:14:13.848 "num_base_bdevs": 3, 00:14:13.848 "num_base_bdevs_discovered": 1, 00:14:13.848 "num_base_bdevs_operational": 3, 00:14:13.848 "base_bdevs_list": [ 00:14:13.848 { 00:14:13.848 "name": "pt1", 00:14:13.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:13.848 "is_configured": true, 00:14:13.848 "data_offset": 2048, 00:14:13.848 "data_size": 63488 00:14:13.848 }, 00:14:13.848 { 00:14:13.848 "name": null, 00:14:13.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:13.848 "is_configured": false, 00:14:13.849 "data_offset": 0, 00:14:13.849 "data_size": 63488 00:14:13.849 }, 00:14:13.849 { 00:14:13.849 "name": null, 00:14:13.849 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:13.849 "is_configured": false, 00:14:13.849 "data_offset": 2048, 00:14:13.849 "data_size": 63488 00:14:13.849 } 00:14:13.849 ] 00:14:13.849 }' 00:14:13.849 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.849 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.415 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:14.415 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:14.415 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:14.415 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.415 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.415 [2024-11-27 04:36:01.930038] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:14.415 [2024-11-27 04:36:01.930130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.415 [2024-11-27 04:36:01.930163] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:14.415 [2024-11-27 04:36:01.930182] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.415 [2024-11-27 04:36:01.930784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.415 [2024-11-27 04:36:01.930817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:14.415 [2024-11-27 04:36:01.930923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:14.415 [2024-11-27 04:36:01.930962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:14.415 pt2 00:14:14.415 04:36:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.415 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:14.415 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:14.415 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:14.415 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.415 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.415 [2024-11-27 04:36:01.942003] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:14.415 [2024-11-27 04:36:01.942211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.415 [2024-11-27 04:36:01.942278] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:14.416 [2024-11-27 04:36:01.942412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.416 [2024-11-27 04:36:01.942929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.416 [2024-11-27 04:36:01.943093] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:14.416 [2024-11-27 04:36:01.943294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:14.416 [2024-11-27 04:36:01.943475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:14.416 [2024-11-27 04:36:01.943824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:14.416 [2024-11-27 04:36:01.943968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:14.416 [2024-11-27 04:36:01.944417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:14:14.416 [2024-11-27 04:36:01.944750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:14.416 [2024-11-27 04:36:01.944899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:14.416 [2024-11-27 04:36:01.945217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.416 pt3 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.416 04:36:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.416 04:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.416 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.416 "name": "raid_bdev1", 00:14:14.416 "uuid": "bf7a0f9e-e311-4903-91a7-353548f5f598", 00:14:14.416 "strip_size_kb": 64, 00:14:14.416 "state": "online", 00:14:14.416 "raid_level": "concat", 00:14:14.416 "superblock": true, 00:14:14.416 "num_base_bdevs": 3, 00:14:14.416 "num_base_bdevs_discovered": 3, 00:14:14.416 "num_base_bdevs_operational": 3, 00:14:14.416 "base_bdevs_list": [ 00:14:14.416 { 00:14:14.416 "name": "pt1", 00:14:14.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.416 "is_configured": true, 00:14:14.416 "data_offset": 2048, 00:14:14.416 "data_size": 63488 00:14:14.416 }, 00:14:14.416 { 00:14:14.416 "name": "pt2", 00:14:14.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.416 "is_configured": true, 00:14:14.416 "data_offset": 2048, 00:14:14.416 "data_size": 63488 00:14:14.416 }, 00:14:14.416 { 00:14:14.416 "name": "pt3", 00:14:14.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.416 "is_configured": true, 00:14:14.416 "data_offset": 2048, 00:14:14.416 "data_size": 63488 00:14:14.416 } 00:14:14.416 ] 00:14:14.416 }' 00:14:14.416 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.416 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.982 [2024-11-27 04:36:02.458563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.982 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:14.982 "name": "raid_bdev1", 00:14:14.982 "aliases": [ 00:14:14.982 "bf7a0f9e-e311-4903-91a7-353548f5f598" 00:14:14.982 ], 00:14:14.982 "product_name": "Raid Volume", 00:14:14.982 "block_size": 512, 00:14:14.982 "num_blocks": 190464, 00:14:14.982 "uuid": "bf7a0f9e-e311-4903-91a7-353548f5f598", 00:14:14.982 "assigned_rate_limits": { 00:14:14.982 "rw_ios_per_sec": 0, 00:14:14.982 "rw_mbytes_per_sec": 0, 00:14:14.982 "r_mbytes_per_sec": 0, 00:14:14.982 "w_mbytes_per_sec": 0 00:14:14.982 }, 00:14:14.982 "claimed": false, 00:14:14.982 "zoned": false, 00:14:14.982 "supported_io_types": { 00:14:14.982 "read": true, 00:14:14.982 "write": true, 00:14:14.982 "unmap": true, 00:14:14.982 "flush": true, 00:14:14.982 "reset": true, 00:14:14.982 "nvme_admin": false, 00:14:14.982 "nvme_io": false, 00:14:14.982 
"nvme_io_md": false, 00:14:14.982 "write_zeroes": true, 00:14:14.982 "zcopy": false, 00:14:14.982 "get_zone_info": false, 00:14:14.982 "zone_management": false, 00:14:14.982 "zone_append": false, 00:14:14.982 "compare": false, 00:14:14.982 "compare_and_write": false, 00:14:14.982 "abort": false, 00:14:14.982 "seek_hole": false, 00:14:14.982 "seek_data": false, 00:14:14.982 "copy": false, 00:14:14.982 "nvme_iov_md": false 00:14:14.982 }, 00:14:14.982 "memory_domains": [ 00:14:14.982 { 00:14:14.982 "dma_device_id": "system", 00:14:14.982 "dma_device_type": 1 00:14:14.982 }, 00:14:14.982 { 00:14:14.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.982 "dma_device_type": 2 00:14:14.982 }, 00:14:14.982 { 00:14:14.982 "dma_device_id": "system", 00:14:14.982 "dma_device_type": 1 00:14:14.982 }, 00:14:14.982 { 00:14:14.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.982 "dma_device_type": 2 00:14:14.982 }, 00:14:14.982 { 00:14:14.982 "dma_device_id": "system", 00:14:14.982 "dma_device_type": 1 00:14:14.982 }, 00:14:14.982 { 00:14:14.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.982 "dma_device_type": 2 00:14:14.982 } 00:14:14.982 ], 00:14:14.982 "driver_specific": { 00:14:14.982 "raid": { 00:14:14.982 "uuid": "bf7a0f9e-e311-4903-91a7-353548f5f598", 00:14:14.982 "strip_size_kb": 64, 00:14:14.982 "state": "online", 00:14:14.982 "raid_level": "concat", 00:14:14.982 "superblock": true, 00:14:14.982 "num_base_bdevs": 3, 00:14:14.983 "num_base_bdevs_discovered": 3, 00:14:14.983 "num_base_bdevs_operational": 3, 00:14:14.983 "base_bdevs_list": [ 00:14:14.983 { 00:14:14.983 "name": "pt1", 00:14:14.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.983 "is_configured": true, 00:14:14.983 "data_offset": 2048, 00:14:14.983 "data_size": 63488 00:14:14.983 }, 00:14:14.983 { 00:14:14.983 "name": "pt2", 00:14:14.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.983 "is_configured": true, 00:14:14.983 "data_offset": 2048, 00:14:14.983 "data_size": 
63488 00:14:14.983 }, 00:14:14.983 { 00:14:14.983 "name": "pt3", 00:14:14.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.983 "is_configured": true, 00:14:14.983 "data_offset": 2048, 00:14:14.983 "data_size": 63488 00:14:14.983 } 00:14:14.983 ] 00:14:14.983 } 00:14:14.983 } 00:14:14.983 }' 00:14:14.983 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:14.983 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:14.983 pt2 00:14:14.983 pt3' 00:14:14.983 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.983 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:14.983 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:14:15.242 [2024-11-27 04:36:02.762533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bf7a0f9e-e311-4903-91a7-353548f5f598 '!=' bf7a0f9e-e311-4903-91a7-353548f5f598 ']' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67019 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67019 ']' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67019 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67019 00:14:15.242 killing process with pid 67019 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67019' 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67019 00:14:15.242 [2024-11-27 04:36:02.841793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.242 [2024-11-27 04:36:02.841906] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.242 04:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67019 00:14:15.242 [2024-11-27 04:36:02.841986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.242 [2024-11-27 04:36:02.842011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:15.501 [2024-11-27 04:36:03.106419] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:16.876 04:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:16.876 ************************************ 00:14:16.876 END TEST raid_superblock_test 00:14:16.876 ************************************ 00:14:16.876 00:14:16.876 real 0m5.673s 00:14:16.876 user 0m8.602s 00:14:16.876 sys 0m0.787s 00:14:16.876 04:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.876 04:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.876 04:36:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:14:16.876 04:36:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:16.876 04:36:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.876 04:36:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:16.876 ************************************ 00:14:16.876 START TEST raid_read_error_test 00:14:16.876 ************************************ 00:14:16.876 04:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:14:16.876 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:16.876 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:16.876 04:36:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:16.876 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:16.876 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:16.876 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MLfHNL7LW6 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67283 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67283 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67283 ']' 00:14:16.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.877 04:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.877 [2024-11-27 04:36:04.292891] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:14:16.877 [2024-11-27 04:36:04.293068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67283 ] 00:14:16.877 [2024-11-27 04:36:04.476447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.135 [2024-11-27 04:36:04.607218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.393 [2024-11-27 04:36:04.810863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.393 [2024-11-27 04:36:04.810950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.652 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.652 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:17.652 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:17.652 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:17.652 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.652 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.911 BaseBdev1_malloc 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.911 true 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.911 [2024-11-27 04:36:05.324378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:17.911 [2024-11-27 04:36:05.324586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.911 [2024-11-27 04:36:05.324741] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:17.911 [2024-11-27 04:36:05.324889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.911 [2024-11-27 04:36:05.327832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.911 [2024-11-27 04:36:05.328012] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:17.911 BaseBdev1 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.911 BaseBdev2_malloc 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.911 true 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.911 [2024-11-27 04:36:05.386840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:17.911 [2024-11-27 04:36:05.387040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.911 [2024-11-27 04:36:05.387110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:17.911 [2024-11-27 04:36:05.387236] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.911 [2024-11-27 04:36:05.390054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.911 [2024-11-27 04:36:05.390106] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:17.911 BaseBdev2 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.911 BaseBdev3_malloc 00:14:17.911 04:36:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.911 true 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.911 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.911 [2024-11-27 04:36:05.456752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:17.911 [2024-11-27 04:36:05.456836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.911 [2024-11-27 04:36:05.456864] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:17.911 [2024-11-27 04:36:05.456881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.911 [2024-11-27 04:36:05.459718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.912 [2024-11-27 04:36:05.459787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:17.912 BaseBdev3 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.912 [2024-11-27 04:36:05.464880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.912 [2024-11-27 04:36:05.467331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.912 [2024-11-27 04:36:05.467574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.912 [2024-11-27 04:36:05.467880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:17.912 [2024-11-27 04:36:05.467901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:17.912 [2024-11-27 04:36:05.468232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:17.912 [2024-11-27 04:36:05.468455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:17.912 [2024-11-27 04:36:05.468479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:17.912 [2024-11-27 04:36:05.468663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.912 04:36:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.912 "name": "raid_bdev1", 00:14:17.912 "uuid": "29d604c2-655e-435b-ab37-9ea91052bd29", 00:14:17.912 "strip_size_kb": 64, 00:14:17.912 "state": "online", 00:14:17.912 "raid_level": "concat", 00:14:17.912 "superblock": true, 00:14:17.912 "num_base_bdevs": 3, 00:14:17.912 "num_base_bdevs_discovered": 3, 00:14:17.912 "num_base_bdevs_operational": 3, 00:14:17.912 "base_bdevs_list": [ 00:14:17.912 { 00:14:17.912 "name": "BaseBdev1", 00:14:17.912 "uuid": "bc78d5b4-6b7b-5e60-888c-6beba19b812f", 00:14:17.912 "is_configured": true, 00:14:17.912 "data_offset": 2048, 00:14:17.912 "data_size": 63488 00:14:17.912 }, 00:14:17.912 { 00:14:17.912 "name": "BaseBdev2", 00:14:17.912 "uuid": "9d4a11e5-1529-56b6-a1cb-648cd19edfff", 00:14:17.912 "is_configured": true, 00:14:17.912 "data_offset": 2048, 00:14:17.912 "data_size": 63488 
00:14:17.912 }, 00:14:17.912 { 00:14:17.912 "name": "BaseBdev3", 00:14:17.912 "uuid": "adeaa9b5-7668-5b15-b844-13b6327aa0b3", 00:14:17.912 "is_configured": true, 00:14:17.912 "data_offset": 2048, 00:14:17.912 "data_size": 63488 00:14:17.912 } 00:14:17.912 ] 00:14:17.912 }' 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.912 04:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.479 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:18.479 04:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:18.479 [2024-11-27 04:36:06.078453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.412 04:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.412 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.671 04:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.671 "name": "raid_bdev1", 00:14:19.671 "uuid": "29d604c2-655e-435b-ab37-9ea91052bd29", 00:14:19.671 "strip_size_kb": 64, 00:14:19.671 "state": "online", 00:14:19.671 "raid_level": "concat", 00:14:19.671 "superblock": true, 00:14:19.671 "num_base_bdevs": 3, 00:14:19.671 "num_base_bdevs_discovered": 3, 00:14:19.671 "num_base_bdevs_operational": 3, 00:14:19.671 "base_bdevs_list": [ 00:14:19.671 { 00:14:19.671 "name": "BaseBdev1", 00:14:19.671 "uuid": "bc78d5b4-6b7b-5e60-888c-6beba19b812f", 00:14:19.671 "is_configured": true, 00:14:19.671 "data_offset": 2048, 00:14:19.671 "data_size": 63488 
00:14:19.671 }, 00:14:19.671 { 00:14:19.671 "name": "BaseBdev2", 00:14:19.671 "uuid": "9d4a11e5-1529-56b6-a1cb-648cd19edfff", 00:14:19.671 "is_configured": true, 00:14:19.671 "data_offset": 2048, 00:14:19.671 "data_size": 63488 00:14:19.671 }, 00:14:19.671 { 00:14:19.671 "name": "BaseBdev3", 00:14:19.671 "uuid": "adeaa9b5-7668-5b15-b844-13b6327aa0b3", 00:14:19.671 "is_configured": true, 00:14:19.671 "data_offset": 2048, 00:14:19.671 "data_size": 63488 00:14:19.671 } 00:14:19.671 ] 00:14:19.671 }' 00:14:19.671 04:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.671 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.930 04:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:19.930 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.930 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.930 [2024-11-27 04:36:07.526264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.930 [2024-11-27 04:36:07.526300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.930 [2024-11-27 04:36:07.529704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.930 [2024-11-27 04:36:07.529763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.930 [2024-11-27 04:36:07.529847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.930 [2024-11-27 04:36:07.529870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:19.930 { 00:14:19.930 "results": [ 00:14:19.930 { 00:14:19.930 "job": "raid_bdev1", 00:14:19.930 "core_mask": "0x1", 00:14:19.930 "workload": "randrw", 00:14:19.930 "percentage": 50, 
00:14:19.930 "status": "finished", 00:14:19.930 "queue_depth": 1, 00:14:19.930 "io_size": 131072, 00:14:19.930 "runtime": 1.445192, 00:14:19.930 "iops": 10426.988247928302, 00:14:19.930 "mibps": 1303.3735309910378, 00:14:19.930 "io_failed": 1, 00:14:19.930 "io_timeout": 0, 00:14:19.930 "avg_latency_us": 133.47807106231525, 00:14:19.930 "min_latency_us": 44.21818181818182, 00:14:19.930 "max_latency_us": 1824.581818181818 00:14:19.930 } 00:14:19.930 ], 00:14:19.930 "core_count": 1 00:14:19.930 } 00:14:19.930 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.930 04:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67283 00:14:19.930 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67283 ']' 00:14:19.930 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67283 00:14:19.930 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:19.930 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.930 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67283 00:14:20.188 killing process with pid 67283 00:14:20.188 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.188 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.188 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67283' 00:14:20.188 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67283 00:14:20.188 [2024-11-27 04:36:07.562346] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:20.188 04:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67283 00:14:20.188 [2024-11-27 
04:36:07.769759] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.560 04:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:21.560 04:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MLfHNL7LW6 00:14:21.560 04:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:21.560 04:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:14:21.560 04:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:21.560 04:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:21.560 04:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:21.560 04:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:14:21.560 00:14:21.560 real 0m4.698s 00:14:21.560 user 0m5.837s 00:14:21.561 sys 0m0.549s 00:14:21.561 04:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.561 ************************************ 00:14:21.561 END TEST raid_read_error_test 00:14:21.561 ************************************ 00:14:21.561 04:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.561 04:36:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:14:21.561 04:36:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:21.561 04:36:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.561 04:36:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.561 ************************************ 00:14:21.561 START TEST raid_write_error_test 00:14:21.561 ************************************ 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:14:21.561 04:36:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:21.561 04:36:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ikdIMMi8A0 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67423 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67423 00:14:21.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67423 ']' 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.561 04:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.561 [2024-11-27 04:36:09.053818] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:14:21.561 [2024-11-27 04:36:09.054248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67423 ] 00:14:21.819 [2024-11-27 04:36:09.233507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.819 [2024-11-27 04:36:09.365192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.076 [2024-11-27 04:36:09.572527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.076 [2024-11-27 04:36:09.572755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.641 04:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.641 04:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:22.641 04:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:22.641 04:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:22.641 04:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.641 BaseBdev1_malloc 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.641 true 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.641 [2024-11-27 04:36:10.027071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:22.641 [2024-11-27 04:36:10.027301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.641 [2024-11-27 04:36:10.027490] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:22.641 [2024-11-27 04:36:10.027653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.641 [2024-11-27 04:36:10.030554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.641 [2024-11-27 04:36:10.030729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:22.641 BaseBdev1 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.641 BaseBdev2_malloc 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.641 true 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.641 [2024-11-27 04:36:10.087712] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:22.641 [2024-11-27 04:36:10.087796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.641 [2024-11-27 04:36:10.087826] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:22.641 [2024-11-27 04:36:10.087846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.641 [2024-11-27 04:36:10.090973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.641 [2024-11-27 04:36:10.091026] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:22.641 BaseBdev2 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:22.641 04:36:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.641 BaseBdev3_malloc 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.641 true 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.641 [2024-11-27 04:36:10.157675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:22.641 [2024-11-27 04:36:10.157748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.641 [2024-11-27 04:36:10.157805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:22.641 [2024-11-27 04:36:10.157830] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.641 [2024-11-27 04:36:10.160661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.641 [2024-11-27 04:36:10.160714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:22.641 BaseBdev3 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.641 [2024-11-27 04:36:10.165806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.641 [2024-11-27 04:36:10.168416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.641 [2024-11-27 04:36:10.168654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.641 [2024-11-27 04:36:10.169098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:22.641 [2024-11-27 04:36:10.169237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:22.641 [2024-11-27 04:36:10.169611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:22.641 [2024-11-27 04:36:10.169995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:22.641 [2024-11-27 04:36:10.170139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:22.641 [2024-11-27 04:36:10.170511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.641 "name": "raid_bdev1", 00:14:22.641 "uuid": "b00cc73e-a96b-43e2-9cf9-39c6a36e029b", 00:14:22.641 "strip_size_kb": 64, 00:14:22.641 "state": "online", 00:14:22.641 "raid_level": "concat", 00:14:22.641 "superblock": true, 00:14:22.641 "num_base_bdevs": 3, 00:14:22.641 "num_base_bdevs_discovered": 3, 00:14:22.641 "num_base_bdevs_operational": 3, 00:14:22.641 "base_bdevs_list": [ 00:14:22.641 { 00:14:22.641 
"name": "BaseBdev1", 00:14:22.641 "uuid": "56348ec5-18f6-56fb-9b5a-f814e04af9c1", 00:14:22.641 "is_configured": true, 00:14:22.641 "data_offset": 2048, 00:14:22.641 "data_size": 63488 00:14:22.641 }, 00:14:22.641 { 00:14:22.641 "name": "BaseBdev2", 00:14:22.641 "uuid": "5ea33730-0dcc-5f4c-a994-4e46804c9ef8", 00:14:22.641 "is_configured": true, 00:14:22.641 "data_offset": 2048, 00:14:22.641 "data_size": 63488 00:14:22.641 }, 00:14:22.641 { 00:14:22.641 "name": "BaseBdev3", 00:14:22.641 "uuid": "e7d406d9-0f5e-51a0-869e-57901a2359d6", 00:14:22.641 "is_configured": true, 00:14:22.641 "data_offset": 2048, 00:14:22.641 "data_size": 63488 00:14:22.641 } 00:14:22.641 ] 00:14:22.641 }' 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.641 04:36:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.205 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:23.205 04:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:23.462 [2024-11-27 04:36:10.856048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.393 "name": "raid_bdev1", 00:14:24.393 "uuid": "b00cc73e-a96b-43e2-9cf9-39c6a36e029b", 00:14:24.393 "strip_size_kb": 64, 00:14:24.393 "state": "online", 
00:14:24.393 "raid_level": "concat", 00:14:24.393 "superblock": true, 00:14:24.393 "num_base_bdevs": 3, 00:14:24.393 "num_base_bdevs_discovered": 3, 00:14:24.393 "num_base_bdevs_operational": 3, 00:14:24.393 "base_bdevs_list": [ 00:14:24.393 { 00:14:24.393 "name": "BaseBdev1", 00:14:24.393 "uuid": "56348ec5-18f6-56fb-9b5a-f814e04af9c1", 00:14:24.393 "is_configured": true, 00:14:24.393 "data_offset": 2048, 00:14:24.393 "data_size": 63488 00:14:24.393 }, 00:14:24.393 { 00:14:24.393 "name": "BaseBdev2", 00:14:24.393 "uuid": "5ea33730-0dcc-5f4c-a994-4e46804c9ef8", 00:14:24.393 "is_configured": true, 00:14:24.393 "data_offset": 2048, 00:14:24.393 "data_size": 63488 00:14:24.393 }, 00:14:24.393 { 00:14:24.393 "name": "BaseBdev3", 00:14:24.393 "uuid": "e7d406d9-0f5e-51a0-869e-57901a2359d6", 00:14:24.393 "is_configured": true, 00:14:24.393 "data_offset": 2048, 00:14:24.393 "data_size": 63488 00:14:24.393 } 00:14:24.393 ] 00:14:24.393 }' 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.393 04:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.650 [2024-11-27 04:36:12.219264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.650 [2024-11-27 04:36:12.219436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.650 [2024-11-27 04:36:12.222910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.650 [2024-11-27 04:36:12.222969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.650 [2024-11-27 04:36:12.223024] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.650 [2024-11-27 04:36:12.223039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:24.650 { 00:14:24.650 "results": [ 00:14:24.650 { 00:14:24.650 "job": "raid_bdev1", 00:14:24.650 "core_mask": "0x1", 00:14:24.650 "workload": "randrw", 00:14:24.650 "percentage": 50, 00:14:24.650 "status": "finished", 00:14:24.650 "queue_depth": 1, 00:14:24.650 "io_size": 131072, 00:14:24.650 "runtime": 1.361146, 00:14:24.650 "iops": 10055.49735296581, 00:14:24.650 "mibps": 1256.9371691207261, 00:14:24.650 "io_failed": 1, 00:14:24.650 "io_timeout": 0, 00:14:24.650 "avg_latency_us": 138.38055788746613, 00:14:24.650 "min_latency_us": 43.52, 00:14:24.650 "max_latency_us": 1832.0290909090909 00:14:24.650 } 00:14:24.650 ], 00:14:24.650 "core_count": 1 00:14:24.650 } 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67423 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67423 ']' 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67423 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67423 00:14:24.650 killing process with pid 67423 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.650 04:36:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67423' 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67423 00:14:24.650 [2024-11-27 04:36:12.256734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.650 04:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67423 00:14:24.907 [2024-11-27 04:36:12.464998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.279 04:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ikdIMMi8A0 00:14:26.279 04:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:26.279 04:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:26.279 ************************************ 00:14:26.279 END TEST raid_write_error_test 00:14:26.279 ************************************ 00:14:26.279 04:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:14:26.279 04:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:26.279 04:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:26.279 04:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:26.279 04:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:14:26.279 00:14:26.279 real 0m4.645s 00:14:26.279 user 0m5.691s 00:14:26.279 sys 0m0.584s 00:14:26.279 04:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.279 04:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.279 04:36:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:26.279 04:36:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:14:26.279 04:36:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:26.279 04:36:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.279 04:36:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.279 ************************************ 00:14:26.279 START TEST raid_state_function_test 00:14:26.279 ************************************ 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:26.279 Process raid pid: 67567 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67567 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67567' 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:26.279 04:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67567 00:14:26.280 04:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67567 ']' 00:14:26.280 04:36:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.280 04:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.280 04:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.280 04:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.280 04:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.280 [2024-11-27 04:36:13.718573] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:14:26.280 [2024-11-27 04:36:13.718929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.538 [2024-11-27 04:36:13.909219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.538 [2024-11-27 04:36:14.082856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.796 [2024-11-27 04:36:14.292242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.796 [2024-11-27 04:36:14.292471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.362 [2024-11-27 04:36:14.773422] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:27.362 [2024-11-27 04:36:14.773487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:27.362 [2024-11-27 04:36:14.773504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.362 [2024-11-27 04:36:14.773521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.362 [2024-11-27 04:36:14.773532] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.362 [2024-11-27 04:36:14.773547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.362 
04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.362 "name": "Existed_Raid", 00:14:27.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.362 "strip_size_kb": 0, 00:14:27.362 "state": "configuring", 00:14:27.362 "raid_level": "raid1", 00:14:27.362 "superblock": false, 00:14:27.362 "num_base_bdevs": 3, 00:14:27.362 "num_base_bdevs_discovered": 0, 00:14:27.362 "num_base_bdevs_operational": 3, 00:14:27.362 "base_bdevs_list": [ 00:14:27.362 { 00:14:27.362 "name": "BaseBdev1", 00:14:27.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.362 "is_configured": false, 00:14:27.362 "data_offset": 0, 00:14:27.362 "data_size": 0 00:14:27.362 }, 00:14:27.362 { 00:14:27.362 "name": "BaseBdev2", 00:14:27.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.362 "is_configured": false, 00:14:27.362 "data_offset": 0, 00:14:27.362 "data_size": 0 00:14:27.362 }, 00:14:27.362 { 00:14:27.362 "name": "BaseBdev3", 00:14:27.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.362 "is_configured": false, 00:14:27.362 "data_offset": 0, 00:14:27.362 "data_size": 0 00:14:27.362 } 00:14:27.362 ] 00:14:27.362 }' 00:14:27.362 04:36:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.362 04:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.928 [2024-11-27 04:36:15.265535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.928 [2024-11-27 04:36:15.265743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.928 [2024-11-27 04:36:15.277488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:27.928 [2024-11-27 04:36:15.277666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:27.928 [2024-11-27 04:36:15.277813] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.928 [2024-11-27 04:36:15.277950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.928 [2024-11-27 04:36:15.278061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.928 [2024-11-27 04:36:15.278206] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.928 [2024-11-27 04:36:15.326603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.928 BaseBdev1 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.928 [ 00:14:27.928 { 00:14:27.928 "name": "BaseBdev1", 00:14:27.928 "aliases": [ 00:14:27.928 "409b76e5-8a87-4585-bf88-b427617df159" 00:14:27.928 ], 00:14:27.928 "product_name": "Malloc disk", 00:14:27.928 "block_size": 512, 00:14:27.928 "num_blocks": 65536, 00:14:27.928 "uuid": "409b76e5-8a87-4585-bf88-b427617df159", 00:14:27.928 "assigned_rate_limits": { 00:14:27.928 "rw_ios_per_sec": 0, 00:14:27.928 "rw_mbytes_per_sec": 0, 00:14:27.928 "r_mbytes_per_sec": 0, 00:14:27.928 "w_mbytes_per_sec": 0 00:14:27.928 }, 00:14:27.928 "claimed": true, 00:14:27.928 "claim_type": "exclusive_write", 00:14:27.928 "zoned": false, 00:14:27.928 "supported_io_types": { 00:14:27.928 "read": true, 00:14:27.928 "write": true, 00:14:27.928 "unmap": true, 00:14:27.928 "flush": true, 00:14:27.928 "reset": true, 00:14:27.928 "nvme_admin": false, 00:14:27.928 "nvme_io": false, 00:14:27.928 "nvme_io_md": false, 00:14:27.928 "write_zeroes": true, 00:14:27.928 "zcopy": true, 00:14:27.928 "get_zone_info": false, 00:14:27.928 "zone_management": false, 00:14:27.928 "zone_append": false, 00:14:27.928 "compare": false, 00:14:27.928 "compare_and_write": false, 00:14:27.928 "abort": true, 00:14:27.928 "seek_hole": false, 00:14:27.928 "seek_data": false, 00:14:27.928 "copy": true, 00:14:27.928 "nvme_iov_md": false 00:14:27.928 }, 00:14:27.928 "memory_domains": [ 00:14:27.928 { 00:14:27.928 "dma_device_id": "system", 00:14:27.928 "dma_device_type": 1 00:14:27.928 }, 00:14:27.928 { 00:14:27.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.928 "dma_device_type": 2 00:14:27.928 } 00:14:27.928 ], 00:14:27.928 "driver_specific": {} 00:14:27.928 } 00:14:27.928 ] 00:14:27.928 04:36:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.928 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.929 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:27.929 "name": "Existed_Raid", 00:14:27.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.929 "strip_size_kb": 0, 00:14:27.929 "state": "configuring", 00:14:27.929 "raid_level": "raid1", 00:14:27.929 "superblock": false, 00:14:27.929 "num_base_bdevs": 3, 00:14:27.929 "num_base_bdevs_discovered": 1, 00:14:27.929 "num_base_bdevs_operational": 3, 00:14:27.929 "base_bdevs_list": [ 00:14:27.929 { 00:14:27.929 "name": "BaseBdev1", 00:14:27.929 "uuid": "409b76e5-8a87-4585-bf88-b427617df159", 00:14:27.929 "is_configured": true, 00:14:27.929 "data_offset": 0, 00:14:27.929 "data_size": 65536 00:14:27.929 }, 00:14:27.929 { 00:14:27.929 "name": "BaseBdev2", 00:14:27.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.929 "is_configured": false, 00:14:27.929 "data_offset": 0, 00:14:27.929 "data_size": 0 00:14:27.929 }, 00:14:27.929 { 00:14:27.929 "name": "BaseBdev3", 00:14:27.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.929 "is_configured": false, 00:14:27.929 "data_offset": 0, 00:14:27.929 "data_size": 0 00:14:27.929 } 00:14:27.929 ] 00:14:27.929 }' 00:14:27.929 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.929 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.496 [2024-11-27 04:36:15.898828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:28.496 [2024-11-27 04:36:15.899058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.496 [2024-11-27 04:36:15.906856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.496 [2024-11-27 04:36:15.909295] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:28.496 [2024-11-27 04:36:15.909360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:28.496 [2024-11-27 04:36:15.909377] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:28.496 [2024-11-27 04:36:15.909393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.496 "name": "Existed_Raid", 00:14:28.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.496 "strip_size_kb": 0, 00:14:28.496 "state": "configuring", 00:14:28.496 "raid_level": "raid1", 00:14:28.496 "superblock": false, 00:14:28.496 "num_base_bdevs": 3, 00:14:28.496 "num_base_bdevs_discovered": 1, 00:14:28.496 "num_base_bdevs_operational": 3, 00:14:28.496 "base_bdevs_list": [ 00:14:28.496 { 00:14:28.496 "name": "BaseBdev1", 00:14:28.496 "uuid": "409b76e5-8a87-4585-bf88-b427617df159", 00:14:28.496 "is_configured": true, 00:14:28.496 "data_offset": 0, 00:14:28.496 "data_size": 65536 00:14:28.496 }, 00:14:28.496 { 00:14:28.496 "name": "BaseBdev2", 00:14:28.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.496 
"is_configured": false, 00:14:28.496 "data_offset": 0, 00:14:28.496 "data_size": 0 00:14:28.496 }, 00:14:28.496 { 00:14:28.496 "name": "BaseBdev3", 00:14:28.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.496 "is_configured": false, 00:14:28.496 "data_offset": 0, 00:14:28.496 "data_size": 0 00:14:28.496 } 00:14:28.496 ] 00:14:28.496 }' 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.496 04:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.063 [2024-11-27 04:36:16.469415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.063 BaseBdev2 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:29.063 04:36:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.063 [ 00:14:29.063 { 00:14:29.063 "name": "BaseBdev2", 00:14:29.063 "aliases": [ 00:14:29.063 "93753cad-e68b-4233-9ccf-fac9b0270d5b" 00:14:29.063 ], 00:14:29.063 "product_name": "Malloc disk", 00:14:29.063 "block_size": 512, 00:14:29.063 "num_blocks": 65536, 00:14:29.063 "uuid": "93753cad-e68b-4233-9ccf-fac9b0270d5b", 00:14:29.063 "assigned_rate_limits": { 00:14:29.063 "rw_ios_per_sec": 0, 00:14:29.063 "rw_mbytes_per_sec": 0, 00:14:29.063 "r_mbytes_per_sec": 0, 00:14:29.063 "w_mbytes_per_sec": 0 00:14:29.063 }, 00:14:29.063 "claimed": true, 00:14:29.063 "claim_type": "exclusive_write", 00:14:29.063 "zoned": false, 00:14:29.063 "supported_io_types": { 00:14:29.063 "read": true, 00:14:29.063 "write": true, 00:14:29.063 "unmap": true, 00:14:29.063 "flush": true, 00:14:29.063 "reset": true, 00:14:29.063 "nvme_admin": false, 00:14:29.063 "nvme_io": false, 00:14:29.063 "nvme_io_md": false, 00:14:29.063 "write_zeroes": true, 00:14:29.063 "zcopy": true, 00:14:29.063 "get_zone_info": false, 00:14:29.063 "zone_management": false, 00:14:29.063 "zone_append": false, 00:14:29.063 "compare": false, 00:14:29.063 "compare_and_write": false, 00:14:29.063 "abort": true, 00:14:29.063 "seek_hole": false, 00:14:29.063 "seek_data": false, 00:14:29.063 "copy": true, 00:14:29.063 "nvme_iov_md": false 00:14:29.063 }, 00:14:29.063 
"memory_domains": [ 00:14:29.063 { 00:14:29.063 "dma_device_id": "system", 00:14:29.063 "dma_device_type": 1 00:14:29.063 }, 00:14:29.063 { 00:14:29.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.063 "dma_device_type": 2 00:14:29.063 } 00:14:29.063 ], 00:14:29.063 "driver_specific": {} 00:14:29.063 } 00:14:29.063 ] 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.063 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.064 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.064 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.064 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:14:29.064 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.064 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.064 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.064 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.064 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.064 "name": "Existed_Raid", 00:14:29.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.064 "strip_size_kb": 0, 00:14:29.064 "state": "configuring", 00:14:29.064 "raid_level": "raid1", 00:14:29.064 "superblock": false, 00:14:29.064 "num_base_bdevs": 3, 00:14:29.064 "num_base_bdevs_discovered": 2, 00:14:29.064 "num_base_bdevs_operational": 3, 00:14:29.064 "base_bdevs_list": [ 00:14:29.064 { 00:14:29.064 "name": "BaseBdev1", 00:14:29.064 "uuid": "409b76e5-8a87-4585-bf88-b427617df159", 00:14:29.064 "is_configured": true, 00:14:29.064 "data_offset": 0, 00:14:29.064 "data_size": 65536 00:14:29.064 }, 00:14:29.064 { 00:14:29.064 "name": "BaseBdev2", 00:14:29.064 "uuid": "93753cad-e68b-4233-9ccf-fac9b0270d5b", 00:14:29.064 "is_configured": true, 00:14:29.064 "data_offset": 0, 00:14:29.064 "data_size": 65536 00:14:29.064 }, 00:14:29.064 { 00:14:29.064 "name": "BaseBdev3", 00:14:29.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.064 "is_configured": false, 00:14:29.064 "data_offset": 0, 00:14:29.064 "data_size": 0 00:14:29.064 } 00:14:29.064 ] 00:14:29.064 }' 00:14:29.064 04:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.064 04:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.629 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:14:29.629 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.629 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.629 [2024-11-27 04:36:17.050703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.629 [2024-11-27 04:36:17.050768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:29.629 [2024-11-27 04:36:17.050824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:29.630 [2024-11-27 04:36:17.051184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:29.630 [2024-11-27 04:36:17.051432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:29.630 [2024-11-27 04:36:17.051455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:29.630 [2024-11-27 04:36:17.051788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.630 BaseBdev3 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.630 [ 00:14:29.630 { 00:14:29.630 "name": "BaseBdev3", 00:14:29.630 "aliases": [ 00:14:29.630 "1d048790-0b56-4bfb-b33d-02db02231ed4" 00:14:29.630 ], 00:14:29.630 "product_name": "Malloc disk", 00:14:29.630 "block_size": 512, 00:14:29.630 "num_blocks": 65536, 00:14:29.630 "uuid": "1d048790-0b56-4bfb-b33d-02db02231ed4", 00:14:29.630 "assigned_rate_limits": { 00:14:29.630 "rw_ios_per_sec": 0, 00:14:29.630 "rw_mbytes_per_sec": 0, 00:14:29.630 "r_mbytes_per_sec": 0, 00:14:29.630 "w_mbytes_per_sec": 0 00:14:29.630 }, 00:14:29.630 "claimed": true, 00:14:29.630 "claim_type": "exclusive_write", 00:14:29.630 "zoned": false, 00:14:29.630 "supported_io_types": { 00:14:29.630 "read": true, 00:14:29.630 "write": true, 00:14:29.630 "unmap": true, 00:14:29.630 "flush": true, 00:14:29.630 "reset": true, 00:14:29.630 "nvme_admin": false, 00:14:29.630 "nvme_io": false, 00:14:29.630 "nvme_io_md": false, 00:14:29.630 "write_zeroes": true, 00:14:29.630 "zcopy": true, 00:14:29.630 "get_zone_info": false, 00:14:29.630 "zone_management": false, 00:14:29.630 "zone_append": false, 00:14:29.630 "compare": false, 00:14:29.630 "compare_and_write": false, 00:14:29.630 "abort": true, 00:14:29.630 "seek_hole": false, 00:14:29.630 "seek_data": false, 00:14:29.630 
"copy": true, 00:14:29.630 "nvme_iov_md": false 00:14:29.630 }, 00:14:29.630 "memory_domains": [ 00:14:29.630 { 00:14:29.630 "dma_device_id": "system", 00:14:29.630 "dma_device_type": 1 00:14:29.630 }, 00:14:29.630 { 00:14:29.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.630 "dma_device_type": 2 00:14:29.630 } 00:14:29.630 ], 00:14:29.630 "driver_specific": {} 00:14:29.630 } 00:14:29.630 ] 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.630 04:36:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.630 "name": "Existed_Raid", 00:14:29.630 "uuid": "b4dcf379-fb8d-4628-8e25-31cbe1b89169", 00:14:29.630 "strip_size_kb": 0, 00:14:29.630 "state": "online", 00:14:29.630 "raid_level": "raid1", 00:14:29.630 "superblock": false, 00:14:29.630 "num_base_bdevs": 3, 00:14:29.630 "num_base_bdevs_discovered": 3, 00:14:29.630 "num_base_bdevs_operational": 3, 00:14:29.630 "base_bdevs_list": [ 00:14:29.630 { 00:14:29.630 "name": "BaseBdev1", 00:14:29.630 "uuid": "409b76e5-8a87-4585-bf88-b427617df159", 00:14:29.630 "is_configured": true, 00:14:29.630 "data_offset": 0, 00:14:29.630 "data_size": 65536 00:14:29.630 }, 00:14:29.630 { 00:14:29.630 "name": "BaseBdev2", 00:14:29.630 "uuid": "93753cad-e68b-4233-9ccf-fac9b0270d5b", 00:14:29.630 "is_configured": true, 00:14:29.630 "data_offset": 0, 00:14:29.630 "data_size": 65536 00:14:29.630 }, 00:14:29.630 { 00:14:29.630 "name": "BaseBdev3", 00:14:29.630 "uuid": "1d048790-0b56-4bfb-b33d-02db02231ed4", 00:14:29.630 "is_configured": true, 00:14:29.630 "data_offset": 0, 00:14:29.630 "data_size": 65536 00:14:29.630 } 00:14:29.630 ] 00:14:29.630 }' 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.630 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.206 04:36:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:30.206 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:30.206 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:30.206 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:30.206 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:30.206 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:30.206 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:30.206 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:30.207 [2024-11-27 04:36:17.579288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:30.207 "name": "Existed_Raid", 00:14:30.207 "aliases": [ 00:14:30.207 "b4dcf379-fb8d-4628-8e25-31cbe1b89169" 00:14:30.207 ], 00:14:30.207 "product_name": "Raid Volume", 00:14:30.207 "block_size": 512, 00:14:30.207 "num_blocks": 65536, 00:14:30.207 "uuid": "b4dcf379-fb8d-4628-8e25-31cbe1b89169", 00:14:30.207 "assigned_rate_limits": { 00:14:30.207 "rw_ios_per_sec": 0, 00:14:30.207 "rw_mbytes_per_sec": 0, 00:14:30.207 "r_mbytes_per_sec": 0, 00:14:30.207 "w_mbytes_per_sec": 0 00:14:30.207 }, 00:14:30.207 "claimed": false, 00:14:30.207 "zoned": false, 
00:14:30.207 "supported_io_types": { 00:14:30.207 "read": true, 00:14:30.207 "write": true, 00:14:30.207 "unmap": false, 00:14:30.207 "flush": false, 00:14:30.207 "reset": true, 00:14:30.207 "nvme_admin": false, 00:14:30.207 "nvme_io": false, 00:14:30.207 "nvme_io_md": false, 00:14:30.207 "write_zeroes": true, 00:14:30.207 "zcopy": false, 00:14:30.207 "get_zone_info": false, 00:14:30.207 "zone_management": false, 00:14:30.207 "zone_append": false, 00:14:30.207 "compare": false, 00:14:30.207 "compare_and_write": false, 00:14:30.207 "abort": false, 00:14:30.207 "seek_hole": false, 00:14:30.207 "seek_data": false, 00:14:30.207 "copy": false, 00:14:30.207 "nvme_iov_md": false 00:14:30.207 }, 00:14:30.207 "memory_domains": [ 00:14:30.207 { 00:14:30.207 "dma_device_id": "system", 00:14:30.207 "dma_device_type": 1 00:14:30.207 }, 00:14:30.207 { 00:14:30.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.207 "dma_device_type": 2 00:14:30.207 }, 00:14:30.207 { 00:14:30.207 "dma_device_id": "system", 00:14:30.207 "dma_device_type": 1 00:14:30.207 }, 00:14:30.207 { 00:14:30.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.207 "dma_device_type": 2 00:14:30.207 }, 00:14:30.207 { 00:14:30.207 "dma_device_id": "system", 00:14:30.207 "dma_device_type": 1 00:14:30.207 }, 00:14:30.207 { 00:14:30.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.207 "dma_device_type": 2 00:14:30.207 } 00:14:30.207 ], 00:14:30.207 "driver_specific": { 00:14:30.207 "raid": { 00:14:30.207 "uuid": "b4dcf379-fb8d-4628-8e25-31cbe1b89169", 00:14:30.207 "strip_size_kb": 0, 00:14:30.207 "state": "online", 00:14:30.207 "raid_level": "raid1", 00:14:30.207 "superblock": false, 00:14:30.207 "num_base_bdevs": 3, 00:14:30.207 "num_base_bdevs_discovered": 3, 00:14:30.207 "num_base_bdevs_operational": 3, 00:14:30.207 "base_bdevs_list": [ 00:14:30.207 { 00:14:30.207 "name": "BaseBdev1", 00:14:30.207 "uuid": "409b76e5-8a87-4585-bf88-b427617df159", 00:14:30.207 "is_configured": true, 00:14:30.207 
"data_offset": 0, 00:14:30.207 "data_size": 65536 00:14:30.207 }, 00:14:30.207 { 00:14:30.207 "name": "BaseBdev2", 00:14:30.207 "uuid": "93753cad-e68b-4233-9ccf-fac9b0270d5b", 00:14:30.207 "is_configured": true, 00:14:30.207 "data_offset": 0, 00:14:30.207 "data_size": 65536 00:14:30.207 }, 00:14:30.207 { 00:14:30.207 "name": "BaseBdev3", 00:14:30.207 "uuid": "1d048790-0b56-4bfb-b33d-02db02231ed4", 00:14:30.207 "is_configured": true, 00:14:30.207 "data_offset": 0, 00:14:30.207 "data_size": 65536 00:14:30.207 } 00:14:30.207 ] 00:14:30.207 } 00:14:30.207 } 00:14:30.207 }' 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:30.207 BaseBdev2 00:14:30.207 BaseBdev3' 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.207 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.465 [2024-11-27 04:36:17.902995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:30.465 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.466 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.466 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.466 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.466 04:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.466 04:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.466 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.466 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.466 "name": "Existed_Raid", 00:14:30.466 "uuid": "b4dcf379-fb8d-4628-8e25-31cbe1b89169", 00:14:30.466 "strip_size_kb": 0, 00:14:30.466 "state": "online", 00:14:30.466 "raid_level": "raid1", 00:14:30.466 "superblock": false, 00:14:30.466 "num_base_bdevs": 3, 00:14:30.466 "num_base_bdevs_discovered": 2, 00:14:30.466 "num_base_bdevs_operational": 2, 00:14:30.466 "base_bdevs_list": [ 00:14:30.466 { 00:14:30.466 "name": null, 00:14:30.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.466 "is_configured": false, 00:14:30.466 "data_offset": 0, 00:14:30.466 "data_size": 65536 00:14:30.466 }, 00:14:30.466 { 00:14:30.466 "name": "BaseBdev2", 00:14:30.466 "uuid": "93753cad-e68b-4233-9ccf-fac9b0270d5b", 00:14:30.466 "is_configured": true, 00:14:30.466 "data_offset": 0, 00:14:30.466 "data_size": 65536 00:14:30.466 }, 00:14:30.466 { 00:14:30.466 "name": "BaseBdev3", 00:14:30.466 "uuid": "1d048790-0b56-4bfb-b33d-02db02231ed4", 00:14:30.466 "is_configured": true, 00:14:30.466 "data_offset": 0, 00:14:30.466 "data_size": 65536 00:14:30.466 } 00:14:30.466 ] 
00:14:30.466 }' 00:14:30.466 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.466 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.032 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.033 [2024-11-27 04:36:18.567058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:31.033 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:31.291 04:36:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.291 [2024-11-27 04:36:18.701718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:31.291 [2024-11-27 04:36:18.701865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.291 [2024-11-27 04:36:18.784230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.291 [2024-11-27 04:36:18.784300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.291 [2024-11-27 04:36:18.784321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:31.291 04:36:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:31.291 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.292 BaseBdev2 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.292 
04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.292 [ 00:14:31.292 { 00:14:31.292 "name": "BaseBdev2", 00:14:31.292 "aliases": [ 00:14:31.292 "e82f6825-908a-40ca-b062-21a01c1cccac" 00:14:31.292 ], 00:14:31.292 "product_name": "Malloc disk", 00:14:31.292 "block_size": 512, 00:14:31.292 "num_blocks": 65536, 00:14:31.292 "uuid": "e82f6825-908a-40ca-b062-21a01c1cccac", 00:14:31.292 "assigned_rate_limits": { 00:14:31.292 "rw_ios_per_sec": 0, 00:14:31.292 "rw_mbytes_per_sec": 0, 00:14:31.292 "r_mbytes_per_sec": 0, 00:14:31.292 "w_mbytes_per_sec": 0 00:14:31.292 }, 00:14:31.292 "claimed": false, 00:14:31.292 "zoned": false, 00:14:31.292 "supported_io_types": { 00:14:31.292 "read": true, 00:14:31.292 "write": true, 00:14:31.292 "unmap": true, 00:14:31.292 "flush": true, 00:14:31.292 "reset": true, 00:14:31.292 "nvme_admin": false, 00:14:31.292 "nvme_io": false, 00:14:31.292 "nvme_io_md": false, 00:14:31.292 "write_zeroes": true, 
00:14:31.292 "zcopy": true, 00:14:31.292 "get_zone_info": false, 00:14:31.292 "zone_management": false, 00:14:31.292 "zone_append": false, 00:14:31.292 "compare": false, 00:14:31.292 "compare_and_write": false, 00:14:31.292 "abort": true, 00:14:31.292 "seek_hole": false, 00:14:31.292 "seek_data": false, 00:14:31.292 "copy": true, 00:14:31.292 "nvme_iov_md": false 00:14:31.292 }, 00:14:31.292 "memory_domains": [ 00:14:31.292 { 00:14:31.292 "dma_device_id": "system", 00:14:31.292 "dma_device_type": 1 00:14:31.292 }, 00:14:31.292 { 00:14:31.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.292 "dma_device_type": 2 00:14:31.292 } 00:14:31.292 ], 00:14:31.292 "driver_specific": {} 00:14:31.292 } 00:14:31.292 ] 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.292 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.551 BaseBdev3 00:14:31.551 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.551 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:31.551 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:31.551 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.551 04:36:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:31.551 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.551 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.551 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.551 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.551 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.551 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.552 [ 00:14:31.552 { 00:14:31.552 "name": "BaseBdev3", 00:14:31.552 "aliases": [ 00:14:31.552 "88e38c8b-bd83-427d-9117-0c668f1bf45f" 00:14:31.552 ], 00:14:31.552 "product_name": "Malloc disk", 00:14:31.552 "block_size": 512, 00:14:31.552 "num_blocks": 65536, 00:14:31.552 "uuid": "88e38c8b-bd83-427d-9117-0c668f1bf45f", 00:14:31.552 "assigned_rate_limits": { 00:14:31.552 "rw_ios_per_sec": 0, 00:14:31.552 "rw_mbytes_per_sec": 0, 00:14:31.552 "r_mbytes_per_sec": 0, 00:14:31.552 "w_mbytes_per_sec": 0 00:14:31.552 }, 00:14:31.552 "claimed": false, 00:14:31.552 "zoned": false, 00:14:31.552 "supported_io_types": { 00:14:31.552 "read": true, 00:14:31.552 "write": true, 00:14:31.552 "unmap": true, 00:14:31.552 "flush": true, 00:14:31.552 "reset": true, 00:14:31.552 "nvme_admin": false, 00:14:31.552 "nvme_io": false, 00:14:31.552 "nvme_io_md": false, 00:14:31.552 "write_zeroes": true, 
00:14:31.552 "zcopy": true, 00:14:31.552 "get_zone_info": false, 00:14:31.552 "zone_management": false, 00:14:31.552 "zone_append": false, 00:14:31.552 "compare": false, 00:14:31.552 "compare_and_write": false, 00:14:31.552 "abort": true, 00:14:31.552 "seek_hole": false, 00:14:31.552 "seek_data": false, 00:14:31.552 "copy": true, 00:14:31.552 "nvme_iov_md": false 00:14:31.552 }, 00:14:31.552 "memory_domains": [ 00:14:31.552 { 00:14:31.552 "dma_device_id": "system", 00:14:31.552 "dma_device_type": 1 00:14:31.552 }, 00:14:31.552 { 00:14:31.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.552 "dma_device_type": 2 00:14:31.552 } 00:14:31.552 ], 00:14:31.552 "driver_specific": {} 00:14:31.552 } 00:14:31.552 ] 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.552 [2024-11-27 04:36:18.991697] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.552 [2024-11-27 04:36:18.991909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.552 [2024-11-27 04:36:18.992040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.552 [2024-11-27 04:36:18.994590] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.552 04:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.552 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.552 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.552 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.552 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.552 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:31.552 "name": "Existed_Raid", 00:14:31.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.552 "strip_size_kb": 0, 00:14:31.552 "state": "configuring", 00:14:31.552 "raid_level": "raid1", 00:14:31.552 "superblock": false, 00:14:31.552 "num_base_bdevs": 3, 00:14:31.552 "num_base_bdevs_discovered": 2, 00:14:31.552 "num_base_bdevs_operational": 3, 00:14:31.552 "base_bdevs_list": [ 00:14:31.552 { 00:14:31.552 "name": "BaseBdev1", 00:14:31.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.552 "is_configured": false, 00:14:31.552 "data_offset": 0, 00:14:31.552 "data_size": 0 00:14:31.552 }, 00:14:31.552 { 00:14:31.552 "name": "BaseBdev2", 00:14:31.552 "uuid": "e82f6825-908a-40ca-b062-21a01c1cccac", 00:14:31.552 "is_configured": true, 00:14:31.552 "data_offset": 0, 00:14:31.552 "data_size": 65536 00:14:31.552 }, 00:14:31.552 { 00:14:31.552 "name": "BaseBdev3", 00:14:31.552 "uuid": "88e38c8b-bd83-427d-9117-0c668f1bf45f", 00:14:31.552 "is_configured": true, 00:14:31.552 "data_offset": 0, 00:14:31.552 "data_size": 65536 00:14:31.552 } 00:14:31.552 ] 00:14:31.552 }' 00:14:31.552 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.552 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.122 [2024-11-27 04:36:19.487879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.122 "name": "Existed_Raid", 00:14:32.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.122 "strip_size_kb": 0, 00:14:32.122 "state": "configuring", 00:14:32.122 "raid_level": "raid1", 00:14:32.122 "superblock": false, 00:14:32.122 "num_base_bdevs": 3, 
00:14:32.122 "num_base_bdevs_discovered": 1, 00:14:32.122 "num_base_bdevs_operational": 3, 00:14:32.122 "base_bdevs_list": [ 00:14:32.122 { 00:14:32.122 "name": "BaseBdev1", 00:14:32.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.122 "is_configured": false, 00:14:32.122 "data_offset": 0, 00:14:32.122 "data_size": 0 00:14:32.122 }, 00:14:32.122 { 00:14:32.122 "name": null, 00:14:32.122 "uuid": "e82f6825-908a-40ca-b062-21a01c1cccac", 00:14:32.122 "is_configured": false, 00:14:32.122 "data_offset": 0, 00:14:32.122 "data_size": 65536 00:14:32.122 }, 00:14:32.122 { 00:14:32.122 "name": "BaseBdev3", 00:14:32.122 "uuid": "88e38c8b-bd83-427d-9117-0c668f1bf45f", 00:14:32.122 "is_configured": true, 00:14:32.122 "data_offset": 0, 00:14:32.122 "data_size": 65536 00:14:32.122 } 00:14:32.122 ] 00:14:32.122 }' 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.122 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.381 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:32.381 04:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.381 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.381 04:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.639 04:36:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.639 [2024-11-27 04:36:20.085716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.639 BaseBdev1 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.639 [ 00:14:32.639 { 00:14:32.639 "name": "BaseBdev1", 00:14:32.639 "aliases": [ 00:14:32.639 "88750360-10c6-4d5b-9ab6-67da4ffc50c5" 00:14:32.639 ], 00:14:32.639 "product_name": "Malloc disk", 
00:14:32.639 "block_size": 512, 00:14:32.639 "num_blocks": 65536, 00:14:32.639 "uuid": "88750360-10c6-4d5b-9ab6-67da4ffc50c5", 00:14:32.639 "assigned_rate_limits": { 00:14:32.639 "rw_ios_per_sec": 0, 00:14:32.639 "rw_mbytes_per_sec": 0, 00:14:32.639 "r_mbytes_per_sec": 0, 00:14:32.639 "w_mbytes_per_sec": 0 00:14:32.639 }, 00:14:32.639 "claimed": true, 00:14:32.639 "claim_type": "exclusive_write", 00:14:32.639 "zoned": false, 00:14:32.639 "supported_io_types": { 00:14:32.639 "read": true, 00:14:32.639 "write": true, 00:14:32.639 "unmap": true, 00:14:32.639 "flush": true, 00:14:32.639 "reset": true, 00:14:32.639 "nvme_admin": false, 00:14:32.639 "nvme_io": false, 00:14:32.639 "nvme_io_md": false, 00:14:32.639 "write_zeroes": true, 00:14:32.639 "zcopy": true, 00:14:32.639 "get_zone_info": false, 00:14:32.639 "zone_management": false, 00:14:32.639 "zone_append": false, 00:14:32.639 "compare": false, 00:14:32.639 "compare_and_write": false, 00:14:32.639 "abort": true, 00:14:32.639 "seek_hole": false, 00:14:32.639 "seek_data": false, 00:14:32.639 "copy": true, 00:14:32.639 "nvme_iov_md": false 00:14:32.639 }, 00:14:32.639 "memory_domains": [ 00:14:32.639 { 00:14:32.639 "dma_device_id": "system", 00:14:32.639 "dma_device_type": 1 00:14:32.639 }, 00:14:32.639 { 00:14:32.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.639 "dma_device_type": 2 00:14:32.639 } 00:14:32.639 ], 00:14:32.639 "driver_specific": {} 00:14:32.639 } 00:14:32.639 ] 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.639 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.640 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.640 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.640 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.640 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.640 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.640 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.640 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.640 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.640 "name": "Existed_Raid", 00:14:32.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.640 "strip_size_kb": 0, 00:14:32.640 "state": "configuring", 00:14:32.640 "raid_level": "raid1", 00:14:32.640 "superblock": false, 00:14:32.640 "num_base_bdevs": 3, 00:14:32.640 "num_base_bdevs_discovered": 2, 00:14:32.640 "num_base_bdevs_operational": 3, 00:14:32.640 "base_bdevs_list": [ 00:14:32.640 { 00:14:32.640 "name": "BaseBdev1", 00:14:32.640 "uuid": 
"88750360-10c6-4d5b-9ab6-67da4ffc50c5", 00:14:32.640 "is_configured": true, 00:14:32.640 "data_offset": 0, 00:14:32.640 "data_size": 65536 00:14:32.640 }, 00:14:32.640 { 00:14:32.640 "name": null, 00:14:32.640 "uuid": "e82f6825-908a-40ca-b062-21a01c1cccac", 00:14:32.640 "is_configured": false, 00:14:32.640 "data_offset": 0, 00:14:32.640 "data_size": 65536 00:14:32.640 }, 00:14:32.640 { 00:14:32.640 "name": "BaseBdev3", 00:14:32.640 "uuid": "88e38c8b-bd83-427d-9117-0c668f1bf45f", 00:14:32.640 "is_configured": true, 00:14:32.640 "data_offset": 0, 00:14:32.640 "data_size": 65536 00:14:32.640 } 00:14:32.640 ] 00:14:32.640 }' 00:14:32.640 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.640 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.207 [2024-11-27 04:36:20.725935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:33.207 04:36:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.207 "name": "Existed_Raid", 00:14:33.207 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:33.207 "strip_size_kb": 0, 00:14:33.207 "state": "configuring", 00:14:33.207 "raid_level": "raid1", 00:14:33.207 "superblock": false, 00:14:33.207 "num_base_bdevs": 3, 00:14:33.207 "num_base_bdevs_discovered": 1, 00:14:33.207 "num_base_bdevs_operational": 3, 00:14:33.207 "base_bdevs_list": [ 00:14:33.207 { 00:14:33.207 "name": "BaseBdev1", 00:14:33.207 "uuid": "88750360-10c6-4d5b-9ab6-67da4ffc50c5", 00:14:33.207 "is_configured": true, 00:14:33.207 "data_offset": 0, 00:14:33.207 "data_size": 65536 00:14:33.207 }, 00:14:33.207 { 00:14:33.207 "name": null, 00:14:33.207 "uuid": "e82f6825-908a-40ca-b062-21a01c1cccac", 00:14:33.207 "is_configured": false, 00:14:33.207 "data_offset": 0, 00:14:33.207 "data_size": 65536 00:14:33.207 }, 00:14:33.207 { 00:14:33.207 "name": null, 00:14:33.207 "uuid": "88e38c8b-bd83-427d-9117-0c668f1bf45f", 00:14:33.207 "is_configured": false, 00:14:33.207 "data_offset": 0, 00:14:33.207 "data_size": 65536 00:14:33.207 } 00:14:33.207 ] 00:14:33.207 }' 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.207 04:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.775 [2024-11-27 04:36:21.342163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.775 "name": "Existed_Raid", 00:14:33.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.775 "strip_size_kb": 0, 00:14:33.775 "state": "configuring", 00:14:33.775 "raid_level": "raid1", 00:14:33.775 "superblock": false, 00:14:33.775 "num_base_bdevs": 3, 00:14:33.775 "num_base_bdevs_discovered": 2, 00:14:33.775 "num_base_bdevs_operational": 3, 00:14:33.775 "base_bdevs_list": [ 00:14:33.775 { 00:14:33.775 "name": "BaseBdev1", 00:14:33.775 "uuid": "88750360-10c6-4d5b-9ab6-67da4ffc50c5", 00:14:33.775 "is_configured": true, 00:14:33.775 "data_offset": 0, 00:14:33.775 "data_size": 65536 00:14:33.775 }, 00:14:33.775 { 00:14:33.775 "name": null, 00:14:33.775 "uuid": "e82f6825-908a-40ca-b062-21a01c1cccac", 00:14:33.775 "is_configured": false, 00:14:33.775 "data_offset": 0, 00:14:33.775 "data_size": 65536 00:14:33.775 }, 00:14:33.775 { 00:14:33.775 "name": "BaseBdev3", 00:14:33.775 "uuid": "88e38c8b-bd83-427d-9117-0c668f1bf45f", 00:14:33.775 "is_configured": true, 00:14:33.775 "data_offset": 0, 00:14:33.775 "data_size": 65536 00:14:33.775 } 00:14:33.775 ] 00:14:33.775 }' 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.775 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.341 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.341 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.341 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:34.341 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:34.341 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.341 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:34.341 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:34.341 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.342 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.342 [2024-11-27 04:36:21.874481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.702 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.703 04:36:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.703 04:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.703 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.703 "name": "Existed_Raid", 00:14:34.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.703 "strip_size_kb": 0, 00:14:34.703 "state": "configuring", 00:14:34.703 "raid_level": "raid1", 00:14:34.703 "superblock": false, 00:14:34.703 "num_base_bdevs": 3, 00:14:34.703 "num_base_bdevs_discovered": 1, 00:14:34.703 "num_base_bdevs_operational": 3, 00:14:34.703 "base_bdevs_list": [ 00:14:34.703 { 00:14:34.703 "name": null, 00:14:34.703 "uuid": "88750360-10c6-4d5b-9ab6-67da4ffc50c5", 00:14:34.703 "is_configured": false, 00:14:34.703 "data_offset": 0, 00:14:34.703 "data_size": 65536 00:14:34.703 }, 00:14:34.703 { 00:14:34.703 "name": null, 00:14:34.703 "uuid": "e82f6825-908a-40ca-b062-21a01c1cccac", 00:14:34.703 "is_configured": false, 00:14:34.703 "data_offset": 0, 00:14:34.703 "data_size": 65536 00:14:34.703 }, 00:14:34.703 { 00:14:34.703 "name": "BaseBdev3", 00:14:34.703 "uuid": "88e38c8b-bd83-427d-9117-0c668f1bf45f", 00:14:34.703 "is_configured": true, 00:14:34.703 "data_offset": 0, 00:14:34.703 "data_size": 65536 00:14:34.703 } 00:14:34.703 ] 00:14:34.703 }' 00:14:34.703 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.703 04:36:22 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.961 [2024-11-27 04:36:22.534203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.961 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:14:34.962 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.962 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.962 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.962 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.962 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.962 04:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.962 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.962 04:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.962 04:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.220 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.220 "name": "Existed_Raid", 00:14:35.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.220 "strip_size_kb": 0, 00:14:35.220 "state": "configuring", 00:14:35.220 "raid_level": "raid1", 00:14:35.220 "superblock": false, 00:14:35.220 "num_base_bdevs": 3, 00:14:35.220 "num_base_bdevs_discovered": 2, 00:14:35.220 "num_base_bdevs_operational": 3, 00:14:35.220 "base_bdevs_list": [ 00:14:35.220 { 00:14:35.220 "name": null, 00:14:35.220 "uuid": "88750360-10c6-4d5b-9ab6-67da4ffc50c5", 00:14:35.220 "is_configured": false, 00:14:35.220 "data_offset": 0, 00:14:35.220 "data_size": 65536 00:14:35.220 }, 00:14:35.220 { 00:14:35.220 "name": "BaseBdev2", 00:14:35.220 "uuid": "e82f6825-908a-40ca-b062-21a01c1cccac", 00:14:35.220 "is_configured": true, 00:14:35.220 "data_offset": 0, 00:14:35.220 "data_size": 65536 00:14:35.220 }, 00:14:35.220 { 
00:14:35.220 "name": "BaseBdev3", 00:14:35.220 "uuid": "88e38c8b-bd83-427d-9117-0c668f1bf45f", 00:14:35.220 "is_configured": true, 00:14:35.220 "data_offset": 0, 00:14:35.220 "data_size": 65536 00:14:35.220 } 00:14:35.220 ] 00:14:35.220 }' 00:14:35.220 04:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.220 04:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 88750360-10c6-4d5b-9ab6-67da4ffc50c5 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.788 04:36:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 [2024-11-27 04:36:23.280176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:35.788 [2024-11-27 04:36:23.280272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:35.788 [2024-11-27 04:36:23.280289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:35.788 [2024-11-27 04:36:23.280644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:35.788 [2024-11-27 04:36:23.280906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:35.788 [2024-11-27 04:36:23.280934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:35.788 [2024-11-27 04:36:23.281289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.788 NewBaseBdev 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.788 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 [ 00:14:35.788 { 00:14:35.788 "name": "NewBaseBdev", 00:14:35.788 "aliases": [ 00:14:35.788 "88750360-10c6-4d5b-9ab6-67da4ffc50c5" 00:14:35.788 ], 00:14:35.788 "product_name": "Malloc disk", 00:14:35.788 "block_size": 512, 00:14:35.788 "num_blocks": 65536, 00:14:35.788 "uuid": "88750360-10c6-4d5b-9ab6-67da4ffc50c5", 00:14:35.788 "assigned_rate_limits": { 00:14:35.788 "rw_ios_per_sec": 0, 00:14:35.788 "rw_mbytes_per_sec": 0, 00:14:35.788 "r_mbytes_per_sec": 0, 00:14:35.788 "w_mbytes_per_sec": 0 00:14:35.788 }, 00:14:35.788 "claimed": true, 00:14:35.788 "claim_type": "exclusive_write", 00:14:35.788 "zoned": false, 00:14:35.788 "supported_io_types": { 00:14:35.788 "read": true, 00:14:35.788 "write": true, 00:14:35.788 "unmap": true, 00:14:35.788 "flush": true, 00:14:35.788 "reset": true, 00:14:35.788 "nvme_admin": false, 00:14:35.788 "nvme_io": false, 00:14:35.788 "nvme_io_md": false, 00:14:35.788 "write_zeroes": true, 00:14:35.788 "zcopy": true, 00:14:35.788 "get_zone_info": false, 00:14:35.788 "zone_management": false, 00:14:35.788 "zone_append": false, 00:14:35.788 "compare": false, 00:14:35.788 "compare_and_write": false, 00:14:35.789 "abort": true, 00:14:35.789 "seek_hole": false, 00:14:35.789 "seek_data": false, 00:14:35.789 "copy": true, 00:14:35.789 "nvme_iov_md": false 00:14:35.789 }, 00:14:35.789 "memory_domains": [ 00:14:35.789 { 00:14:35.789 
"dma_device_id": "system", 00:14:35.789 "dma_device_type": 1 00:14:35.789 }, 00:14:35.789 { 00:14:35.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.789 "dma_device_type": 2 00:14:35.789 } 00:14:35.789 ], 00:14:35.789 "driver_specific": {} 00:14:35.789 } 00:14:35.789 ] 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.789 "name": "Existed_Raid", 00:14:35.789 "uuid": "e3a2e523-6681-4130-abc1-c88a77987586", 00:14:35.789 "strip_size_kb": 0, 00:14:35.789 "state": "online", 00:14:35.789 "raid_level": "raid1", 00:14:35.789 "superblock": false, 00:14:35.789 "num_base_bdevs": 3, 00:14:35.789 "num_base_bdevs_discovered": 3, 00:14:35.789 "num_base_bdevs_operational": 3, 00:14:35.789 "base_bdevs_list": [ 00:14:35.789 { 00:14:35.789 "name": "NewBaseBdev", 00:14:35.789 "uuid": "88750360-10c6-4d5b-9ab6-67da4ffc50c5", 00:14:35.789 "is_configured": true, 00:14:35.789 "data_offset": 0, 00:14:35.789 "data_size": 65536 00:14:35.789 }, 00:14:35.789 { 00:14:35.789 "name": "BaseBdev2", 00:14:35.789 "uuid": "e82f6825-908a-40ca-b062-21a01c1cccac", 00:14:35.789 "is_configured": true, 00:14:35.789 "data_offset": 0, 00:14:35.789 "data_size": 65536 00:14:35.789 }, 00:14:35.789 { 00:14:35.789 "name": "BaseBdev3", 00:14:35.789 "uuid": "88e38c8b-bd83-427d-9117-0c668f1bf45f", 00:14:35.789 "is_configured": true, 00:14:35.789 "data_offset": 0, 00:14:35.789 "data_size": 65536 00:14:35.789 } 00:14:35.789 ] 00:14:35.789 }' 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.789 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.355 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:36.355 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:36.355 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:36.355 04:36:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:36.355 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:36.355 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:36.355 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:36.355 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:36.355 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.356 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.356 [2024-11-27 04:36:23.832817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.356 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.356 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:36.356 "name": "Existed_Raid", 00:14:36.356 "aliases": [ 00:14:36.356 "e3a2e523-6681-4130-abc1-c88a77987586" 00:14:36.356 ], 00:14:36.356 "product_name": "Raid Volume", 00:14:36.356 "block_size": 512, 00:14:36.356 "num_blocks": 65536, 00:14:36.356 "uuid": "e3a2e523-6681-4130-abc1-c88a77987586", 00:14:36.356 "assigned_rate_limits": { 00:14:36.356 "rw_ios_per_sec": 0, 00:14:36.356 "rw_mbytes_per_sec": 0, 00:14:36.356 "r_mbytes_per_sec": 0, 00:14:36.356 "w_mbytes_per_sec": 0 00:14:36.356 }, 00:14:36.356 "claimed": false, 00:14:36.356 "zoned": false, 00:14:36.356 "supported_io_types": { 00:14:36.356 "read": true, 00:14:36.356 "write": true, 00:14:36.356 "unmap": false, 00:14:36.356 "flush": false, 00:14:36.356 "reset": true, 00:14:36.356 "nvme_admin": false, 00:14:36.356 "nvme_io": false, 00:14:36.356 "nvme_io_md": false, 00:14:36.356 "write_zeroes": true, 00:14:36.356 "zcopy": false, 00:14:36.356 
"get_zone_info": false, 00:14:36.356 "zone_management": false, 00:14:36.356 "zone_append": false, 00:14:36.356 "compare": false, 00:14:36.356 "compare_and_write": false, 00:14:36.356 "abort": false, 00:14:36.356 "seek_hole": false, 00:14:36.356 "seek_data": false, 00:14:36.356 "copy": false, 00:14:36.356 "nvme_iov_md": false 00:14:36.356 }, 00:14:36.356 "memory_domains": [ 00:14:36.356 { 00:14:36.356 "dma_device_id": "system", 00:14:36.356 "dma_device_type": 1 00:14:36.356 }, 00:14:36.356 { 00:14:36.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.356 "dma_device_type": 2 00:14:36.356 }, 00:14:36.356 { 00:14:36.356 "dma_device_id": "system", 00:14:36.356 "dma_device_type": 1 00:14:36.356 }, 00:14:36.356 { 00:14:36.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.356 "dma_device_type": 2 00:14:36.356 }, 00:14:36.356 { 00:14:36.356 "dma_device_id": "system", 00:14:36.356 "dma_device_type": 1 00:14:36.356 }, 00:14:36.356 { 00:14:36.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.356 "dma_device_type": 2 00:14:36.356 } 00:14:36.356 ], 00:14:36.356 "driver_specific": { 00:14:36.356 "raid": { 00:14:36.356 "uuid": "e3a2e523-6681-4130-abc1-c88a77987586", 00:14:36.356 "strip_size_kb": 0, 00:14:36.356 "state": "online", 00:14:36.356 "raid_level": "raid1", 00:14:36.356 "superblock": false, 00:14:36.356 "num_base_bdevs": 3, 00:14:36.356 "num_base_bdevs_discovered": 3, 00:14:36.356 "num_base_bdevs_operational": 3, 00:14:36.356 "base_bdevs_list": [ 00:14:36.356 { 00:14:36.356 "name": "NewBaseBdev", 00:14:36.356 "uuid": "88750360-10c6-4d5b-9ab6-67da4ffc50c5", 00:14:36.356 "is_configured": true, 00:14:36.356 "data_offset": 0, 00:14:36.356 "data_size": 65536 00:14:36.356 }, 00:14:36.356 { 00:14:36.356 "name": "BaseBdev2", 00:14:36.356 "uuid": "e82f6825-908a-40ca-b062-21a01c1cccac", 00:14:36.356 "is_configured": true, 00:14:36.356 "data_offset": 0, 00:14:36.356 "data_size": 65536 00:14:36.356 }, 00:14:36.356 { 00:14:36.356 "name": "BaseBdev3", 00:14:36.356 "uuid": 
"88e38c8b-bd83-427d-9117-0c668f1bf45f", 00:14:36.356 "is_configured": true, 00:14:36.356 "data_offset": 0, 00:14:36.356 "data_size": 65536 00:14:36.356 } 00:14:36.356 ] 00:14:36.356 } 00:14:36.356 } 00:14:36.356 }' 00:14:36.356 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:36.356 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:36.356 BaseBdev2 00:14:36.356 BaseBdev3' 00:14:36.356 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.616 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:36.616 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.616 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:36.616 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.616 04:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.616 04:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:36.616 [2024-11-27 04:36:24.128478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.616 [2024-11-27 04:36:24.128797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.616 [2024-11-27 04:36:24.128959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.616 [2024-11-27 04:36:24.129402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.616 [2024-11-27 04:36:24.129425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67567 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67567 ']' 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67567 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67567 00:14:36.616 killing process with pid 67567 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67567' 00:14:36.616 04:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67567 00:14:36.616 04:36:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67567 00:14:36.616 [2024-11-27 04:36:24.165469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:36.874 [2024-11-27 04:36:24.457868] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.249 ************************************ 00:14:38.250 END TEST raid_state_function_test 00:14:38.250 ************************************ 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:38.250 00:14:38.250 real 0m11.970s 00:14:38.250 user 0m19.704s 00:14:38.250 sys 0m1.681s 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.250 04:36:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:14:38.250 04:36:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:38.250 04:36:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.250 04:36:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.250 ************************************ 00:14:38.250 START TEST raid_state_function_test_sb 00:14:38.250 ************************************ 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:38.250 04:36:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:38.250 
04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:38.250 Process raid pid: 68205 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68205 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68205' 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68205 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:38.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68205 ']' 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.250 04:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.250 [2024-11-27 04:36:25.756522] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:14:38.250 [2024-11-27 04:36:25.756897] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.509 [2024-11-27 04:36:25.929067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.509 [2024-11-27 04:36:26.082884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.767 [2024-11-27 04:36:26.335725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.767 [2024-11-27 04:36:26.335844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.119 04:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.119 04:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:39.119 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:39.119 04:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.119 04:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.119 [2024-11-27 04:36:26.700483] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:39.119 [2024-11-27 04:36:26.700577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:39.119 [2024-11-27 04:36:26.700596] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.119 [2024-11-27 04:36:26.700615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.119 [2024-11-27 04:36:26.700626] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:39.119 [2024-11-27 04:36:26.700641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:39.119 04:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.119 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:39.119 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.119 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.119 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.120 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.120 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.120 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.120 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.120 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.120 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.120 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.120 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.120 04:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.120 04:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.380 04:36:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.380 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.380 "name": "Existed_Raid", 00:14:39.380 "uuid": "9278f589-7486-4c35-98fb-df18b58e5d34", 00:14:39.380 "strip_size_kb": 0, 00:14:39.380 "state": "configuring", 00:14:39.380 "raid_level": "raid1", 00:14:39.380 "superblock": true, 00:14:39.380 "num_base_bdevs": 3, 00:14:39.380 "num_base_bdevs_discovered": 0, 00:14:39.380 "num_base_bdevs_operational": 3, 00:14:39.380 "base_bdevs_list": [ 00:14:39.380 { 00:14:39.380 "name": "BaseBdev1", 00:14:39.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.380 "is_configured": false, 00:14:39.380 "data_offset": 0, 00:14:39.380 "data_size": 0 00:14:39.380 }, 00:14:39.380 { 00:14:39.380 "name": "BaseBdev2", 00:14:39.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.380 "is_configured": false, 00:14:39.380 "data_offset": 0, 00:14:39.380 "data_size": 0 00:14:39.380 }, 00:14:39.380 { 00:14:39.380 "name": "BaseBdev3", 00:14:39.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.380 "is_configured": false, 00:14:39.380 "data_offset": 0, 00:14:39.380 "data_size": 0 00:14:39.380 } 00:14:39.380 ] 00:14:39.380 }' 00:14:39.380 04:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.380 04:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.640 [2024-11-27 04:36:27.188491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.640 [2024-11-27 04:36:27.188548] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.640 [2024-11-27 04:36:27.196450] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:39.640 [2024-11-27 04:36:27.196508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:39.640 [2024-11-27 04:36:27.196524] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.640 [2024-11-27 04:36:27.196540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.640 [2024-11-27 04:36:27.196550] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:39.640 [2024-11-27 04:36:27.196566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.640 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.640 [2024-11-27 04:36:27.245516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.640 BaseBdev1 
00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.641 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.899 [ 00:14:39.899 { 00:14:39.899 "name": "BaseBdev1", 00:14:39.899 "aliases": [ 00:14:39.899 "c5ae8ad2-d71b-4f80-9e58-96f698d8b3b9" 00:14:39.899 ], 00:14:39.899 "product_name": "Malloc disk", 00:14:39.899 "block_size": 512, 00:14:39.900 "num_blocks": 65536, 00:14:39.900 "uuid": "c5ae8ad2-d71b-4f80-9e58-96f698d8b3b9", 00:14:39.900 "assigned_rate_limits": { 00:14:39.900 
"rw_ios_per_sec": 0, 00:14:39.900 "rw_mbytes_per_sec": 0, 00:14:39.900 "r_mbytes_per_sec": 0, 00:14:39.900 "w_mbytes_per_sec": 0 00:14:39.900 }, 00:14:39.900 "claimed": true, 00:14:39.900 "claim_type": "exclusive_write", 00:14:39.900 "zoned": false, 00:14:39.900 "supported_io_types": { 00:14:39.900 "read": true, 00:14:39.900 "write": true, 00:14:39.900 "unmap": true, 00:14:39.900 "flush": true, 00:14:39.900 "reset": true, 00:14:39.900 "nvme_admin": false, 00:14:39.900 "nvme_io": false, 00:14:39.900 "nvme_io_md": false, 00:14:39.900 "write_zeroes": true, 00:14:39.900 "zcopy": true, 00:14:39.900 "get_zone_info": false, 00:14:39.900 "zone_management": false, 00:14:39.900 "zone_append": false, 00:14:39.900 "compare": false, 00:14:39.900 "compare_and_write": false, 00:14:39.900 "abort": true, 00:14:39.900 "seek_hole": false, 00:14:39.900 "seek_data": false, 00:14:39.900 "copy": true, 00:14:39.900 "nvme_iov_md": false 00:14:39.900 }, 00:14:39.900 "memory_domains": [ 00:14:39.900 { 00:14:39.900 "dma_device_id": "system", 00:14:39.900 "dma_device_type": 1 00:14:39.900 }, 00:14:39.900 { 00:14:39.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.900 "dma_device_type": 2 00:14:39.900 } 00:14:39.900 ], 00:14:39.900 "driver_specific": {} 00:14:39.900 } 00:14:39.900 ] 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.900 "name": "Existed_Raid", 00:14:39.900 "uuid": "f308b85c-f0d7-490b-bbaa-75be60120550", 00:14:39.900 "strip_size_kb": 0, 00:14:39.900 "state": "configuring", 00:14:39.900 "raid_level": "raid1", 00:14:39.900 "superblock": true, 00:14:39.900 "num_base_bdevs": 3, 00:14:39.900 "num_base_bdevs_discovered": 1, 00:14:39.900 "num_base_bdevs_operational": 3, 00:14:39.900 "base_bdevs_list": [ 00:14:39.900 { 00:14:39.900 "name": "BaseBdev1", 00:14:39.900 "uuid": "c5ae8ad2-d71b-4f80-9e58-96f698d8b3b9", 00:14:39.900 "is_configured": true, 00:14:39.900 "data_offset": 2048, 00:14:39.900 "data_size": 63488 
00:14:39.900 }, 00:14:39.900 { 00:14:39.900 "name": "BaseBdev2", 00:14:39.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.900 "is_configured": false, 00:14:39.900 "data_offset": 0, 00:14:39.900 "data_size": 0 00:14:39.900 }, 00:14:39.900 { 00:14:39.900 "name": "BaseBdev3", 00:14:39.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.900 "is_configured": false, 00:14:39.900 "data_offset": 0, 00:14:39.900 "data_size": 0 00:14:39.900 } 00:14:39.900 ] 00:14:39.900 }' 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.900 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.467 [2024-11-27 04:36:27.793758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:40.467 [2024-11-27 04:36:27.793886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.467 [2024-11-27 04:36:27.801765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.467 [2024-11-27 04:36:27.804474] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.467 [2024-11-27 04:36:27.804533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.467 [2024-11-27 04:36:27.804551] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:40.467 [2024-11-27 04:36:27.804567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.467 "name": "Existed_Raid", 00:14:40.467 "uuid": "81d13908-5da3-43b3-bc9a-b457e26e3e1e", 00:14:40.467 "strip_size_kb": 0, 00:14:40.467 "state": "configuring", 00:14:40.467 "raid_level": "raid1", 00:14:40.467 "superblock": true, 00:14:40.467 "num_base_bdevs": 3, 00:14:40.467 "num_base_bdevs_discovered": 1, 00:14:40.467 "num_base_bdevs_operational": 3, 00:14:40.467 "base_bdevs_list": [ 00:14:40.467 { 00:14:40.467 "name": "BaseBdev1", 00:14:40.467 "uuid": "c5ae8ad2-d71b-4f80-9e58-96f698d8b3b9", 00:14:40.467 "is_configured": true, 00:14:40.467 "data_offset": 2048, 00:14:40.467 "data_size": 63488 00:14:40.467 }, 00:14:40.467 { 00:14:40.467 "name": "BaseBdev2", 00:14:40.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.467 "is_configured": false, 00:14:40.467 "data_offset": 0, 00:14:40.467 "data_size": 0 00:14:40.467 }, 00:14:40.467 { 00:14:40.467 "name": "BaseBdev3", 00:14:40.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.467 "is_configured": false, 00:14:40.467 "data_offset": 0, 00:14:40.467 "data_size": 0 00:14:40.467 } 00:14:40.467 ] 00:14:40.467 }' 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.467 04:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:40.725 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:40.725 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.725 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.982 [2024-11-27 04:36:28.363868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.982 BaseBdev2 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.982 [ 00:14:40.982 { 00:14:40.982 "name": "BaseBdev2", 00:14:40.982 "aliases": [ 00:14:40.982 "1736b675-44d5-434e-964c-0da7a3af3664" 00:14:40.982 ], 00:14:40.982 "product_name": "Malloc disk", 00:14:40.982 "block_size": 512, 00:14:40.982 "num_blocks": 65536, 00:14:40.982 "uuid": "1736b675-44d5-434e-964c-0da7a3af3664", 00:14:40.982 "assigned_rate_limits": { 00:14:40.982 "rw_ios_per_sec": 0, 00:14:40.982 "rw_mbytes_per_sec": 0, 00:14:40.982 "r_mbytes_per_sec": 0, 00:14:40.982 "w_mbytes_per_sec": 0 00:14:40.982 }, 00:14:40.982 "claimed": true, 00:14:40.982 "claim_type": "exclusive_write", 00:14:40.982 "zoned": false, 00:14:40.982 "supported_io_types": { 00:14:40.982 "read": true, 00:14:40.982 "write": true, 00:14:40.982 "unmap": true, 00:14:40.982 "flush": true, 00:14:40.982 "reset": true, 00:14:40.982 "nvme_admin": false, 00:14:40.982 "nvme_io": false, 00:14:40.982 "nvme_io_md": false, 00:14:40.982 "write_zeroes": true, 00:14:40.982 "zcopy": true, 00:14:40.982 "get_zone_info": false, 00:14:40.982 "zone_management": false, 00:14:40.982 "zone_append": false, 00:14:40.982 "compare": false, 00:14:40.982 "compare_and_write": false, 00:14:40.982 "abort": true, 00:14:40.982 "seek_hole": false, 00:14:40.982 "seek_data": false, 00:14:40.982 "copy": true, 00:14:40.982 "nvme_iov_md": false 00:14:40.982 }, 00:14:40.982 "memory_domains": [ 00:14:40.982 { 00:14:40.982 "dma_device_id": "system", 00:14:40.982 "dma_device_type": 1 00:14:40.982 }, 00:14:40.982 { 00:14:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.982 "dma_device_type": 2 00:14:40.982 } 00:14:40.982 ], 00:14:40.982 "driver_specific": {} 00:14:40.982 } 00:14:40.982 ] 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.982 
04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.982 "name": "Existed_Raid", 00:14:40.982 "uuid": "81d13908-5da3-43b3-bc9a-b457e26e3e1e", 00:14:40.982 "strip_size_kb": 0, 00:14:40.982 "state": "configuring", 00:14:40.982 "raid_level": "raid1", 00:14:40.982 "superblock": true, 00:14:40.982 "num_base_bdevs": 3, 00:14:40.982 "num_base_bdevs_discovered": 2, 00:14:40.982 "num_base_bdevs_operational": 3, 00:14:40.982 "base_bdevs_list": [ 00:14:40.982 { 00:14:40.982 "name": "BaseBdev1", 00:14:40.982 "uuid": "c5ae8ad2-d71b-4f80-9e58-96f698d8b3b9", 00:14:40.982 "is_configured": true, 00:14:40.982 "data_offset": 2048, 00:14:40.982 "data_size": 63488 00:14:40.982 }, 00:14:40.982 { 00:14:40.982 "name": "BaseBdev2", 00:14:40.982 "uuid": "1736b675-44d5-434e-964c-0da7a3af3664", 00:14:40.982 "is_configured": true, 00:14:40.982 "data_offset": 2048, 00:14:40.982 "data_size": 63488 00:14:40.982 }, 00:14:40.982 { 00:14:40.982 "name": "BaseBdev3", 00:14:40.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.982 "is_configured": false, 00:14:40.982 "data_offset": 0, 00:14:40.982 "data_size": 0 00:14:40.982 } 00:14:40.982 ] 00:14:40.982 }' 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.982 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.549 [2024-11-27 04:36:28.972985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:41.549 [2024-11-27 04:36:28.973358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:14:41.549 [2024-11-27 04:36:28.973394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:41.549 [2024-11-27 04:36:28.973847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:41.549 BaseBdev3 00:14:41.549 [2024-11-27 04:36:28.974106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:41.549 [2024-11-27 04:36:28.974126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:41.549 [2024-11-27 04:36:28.974381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.549 04:36:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.549 04:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.549 [ 00:14:41.549 { 00:14:41.549 "name": "BaseBdev3", 00:14:41.549 "aliases": [ 00:14:41.549 "fa962ca0-b187-412e-806e-4142cc030f07" 00:14:41.549 ], 00:14:41.549 "product_name": "Malloc disk", 00:14:41.549 "block_size": 512, 00:14:41.549 "num_blocks": 65536, 00:14:41.549 "uuid": "fa962ca0-b187-412e-806e-4142cc030f07", 00:14:41.549 "assigned_rate_limits": { 00:14:41.549 "rw_ios_per_sec": 0, 00:14:41.549 "rw_mbytes_per_sec": 0, 00:14:41.549 "r_mbytes_per_sec": 0, 00:14:41.549 "w_mbytes_per_sec": 0 00:14:41.549 }, 00:14:41.549 "claimed": true, 00:14:41.549 "claim_type": "exclusive_write", 00:14:41.549 "zoned": false, 00:14:41.549 "supported_io_types": { 00:14:41.549 "read": true, 00:14:41.549 "write": true, 00:14:41.549 "unmap": true, 00:14:41.549 "flush": true, 00:14:41.549 "reset": true, 00:14:41.549 "nvme_admin": false, 00:14:41.549 "nvme_io": false, 00:14:41.549 "nvme_io_md": false, 00:14:41.549 "write_zeroes": true, 00:14:41.549 "zcopy": true, 00:14:41.549 "get_zone_info": false, 00:14:41.549 "zone_management": false, 00:14:41.549 "zone_append": false, 00:14:41.549 "compare": false, 00:14:41.549 "compare_and_write": false, 00:14:41.549 "abort": true, 00:14:41.549 "seek_hole": false, 00:14:41.549 "seek_data": false, 00:14:41.549 "copy": true, 00:14:41.549 "nvme_iov_md": false 00:14:41.549 }, 00:14:41.549 "memory_domains": [ 00:14:41.549 { 00:14:41.549 "dma_device_id": "system", 00:14:41.549 "dma_device_type": 1 00:14:41.549 }, 00:14:41.549 { 00:14:41.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.549 "dma_device_type": 2 00:14:41.549 } 00:14:41.549 ], 00:14:41.549 "driver_specific": {} 00:14:41.549 } 00:14:41.549 ] 
00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.549 
04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.549 "name": "Existed_Raid", 00:14:41.549 "uuid": "81d13908-5da3-43b3-bc9a-b457e26e3e1e", 00:14:41.549 "strip_size_kb": 0, 00:14:41.549 "state": "online", 00:14:41.549 "raid_level": "raid1", 00:14:41.549 "superblock": true, 00:14:41.549 "num_base_bdevs": 3, 00:14:41.549 "num_base_bdevs_discovered": 3, 00:14:41.549 "num_base_bdevs_operational": 3, 00:14:41.549 "base_bdevs_list": [ 00:14:41.549 { 00:14:41.549 "name": "BaseBdev1", 00:14:41.549 "uuid": "c5ae8ad2-d71b-4f80-9e58-96f698d8b3b9", 00:14:41.549 "is_configured": true, 00:14:41.549 "data_offset": 2048, 00:14:41.549 "data_size": 63488 00:14:41.549 }, 00:14:41.549 { 00:14:41.549 "name": "BaseBdev2", 00:14:41.549 "uuid": "1736b675-44d5-434e-964c-0da7a3af3664", 00:14:41.549 "is_configured": true, 00:14:41.549 "data_offset": 2048, 00:14:41.549 "data_size": 63488 00:14:41.549 }, 00:14:41.549 { 00:14:41.549 "name": "BaseBdev3", 00:14:41.549 "uuid": "fa962ca0-b187-412e-806e-4142cc030f07", 00:14:41.549 "is_configured": true, 00:14:41.549 "data_offset": 2048, 00:14:41.549 "data_size": 63488 00:14:41.549 } 00:14:41.549 ] 00:14:41.549 }' 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.549 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.113 [2024-11-27 04:36:29.537602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.113 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:42.113 "name": "Existed_Raid", 00:14:42.113 "aliases": [ 00:14:42.113 "81d13908-5da3-43b3-bc9a-b457e26e3e1e" 00:14:42.113 ], 00:14:42.113 "product_name": "Raid Volume", 00:14:42.113 "block_size": 512, 00:14:42.113 "num_blocks": 63488, 00:14:42.113 "uuid": "81d13908-5da3-43b3-bc9a-b457e26e3e1e", 00:14:42.113 "assigned_rate_limits": { 00:14:42.113 "rw_ios_per_sec": 0, 00:14:42.113 "rw_mbytes_per_sec": 0, 00:14:42.113 "r_mbytes_per_sec": 0, 00:14:42.113 "w_mbytes_per_sec": 0 00:14:42.113 }, 00:14:42.113 "claimed": false, 00:14:42.113 "zoned": false, 00:14:42.113 "supported_io_types": { 00:14:42.113 "read": true, 00:14:42.113 "write": true, 00:14:42.113 "unmap": false, 00:14:42.113 "flush": false, 00:14:42.113 "reset": true, 00:14:42.113 "nvme_admin": false, 00:14:42.113 "nvme_io": false, 00:14:42.113 "nvme_io_md": false, 00:14:42.113 "write_zeroes": true, 
00:14:42.113 "zcopy": false, 00:14:42.113 "get_zone_info": false, 00:14:42.113 "zone_management": false, 00:14:42.113 "zone_append": false, 00:14:42.113 "compare": false, 00:14:42.113 "compare_and_write": false, 00:14:42.113 "abort": false, 00:14:42.113 "seek_hole": false, 00:14:42.113 "seek_data": false, 00:14:42.113 "copy": false, 00:14:42.113 "nvme_iov_md": false 00:14:42.113 }, 00:14:42.113 "memory_domains": [ 00:14:42.113 { 00:14:42.113 "dma_device_id": "system", 00:14:42.113 "dma_device_type": 1 00:14:42.113 }, 00:14:42.113 { 00:14:42.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.114 "dma_device_type": 2 00:14:42.114 }, 00:14:42.114 { 00:14:42.114 "dma_device_id": "system", 00:14:42.114 "dma_device_type": 1 00:14:42.114 }, 00:14:42.114 { 00:14:42.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.114 "dma_device_type": 2 00:14:42.114 }, 00:14:42.114 { 00:14:42.114 "dma_device_id": "system", 00:14:42.114 "dma_device_type": 1 00:14:42.114 }, 00:14:42.114 { 00:14:42.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.114 "dma_device_type": 2 00:14:42.114 } 00:14:42.114 ], 00:14:42.114 "driver_specific": { 00:14:42.114 "raid": { 00:14:42.114 "uuid": "81d13908-5da3-43b3-bc9a-b457e26e3e1e", 00:14:42.114 "strip_size_kb": 0, 00:14:42.114 "state": "online", 00:14:42.114 "raid_level": "raid1", 00:14:42.114 "superblock": true, 00:14:42.114 "num_base_bdevs": 3, 00:14:42.114 "num_base_bdevs_discovered": 3, 00:14:42.114 "num_base_bdevs_operational": 3, 00:14:42.114 "base_bdevs_list": [ 00:14:42.114 { 00:14:42.114 "name": "BaseBdev1", 00:14:42.114 "uuid": "c5ae8ad2-d71b-4f80-9e58-96f698d8b3b9", 00:14:42.114 "is_configured": true, 00:14:42.114 "data_offset": 2048, 00:14:42.114 "data_size": 63488 00:14:42.114 }, 00:14:42.114 { 00:14:42.114 "name": "BaseBdev2", 00:14:42.114 "uuid": "1736b675-44d5-434e-964c-0da7a3af3664", 00:14:42.114 "is_configured": true, 00:14:42.114 "data_offset": 2048, 00:14:42.114 "data_size": 63488 00:14:42.114 }, 00:14:42.114 { 
00:14:42.114 "name": "BaseBdev3", 00:14:42.114 "uuid": "fa962ca0-b187-412e-806e-4142cc030f07", 00:14:42.114 "is_configured": true, 00:14:42.114 "data_offset": 2048, 00:14:42.114 "data_size": 63488 00:14:42.114 } 00:14:42.114 ] 00:14:42.114 } 00:14:42.114 } 00:14:42.114 }' 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:42.114 BaseBdev2 00:14:42.114 BaseBdev3' 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.114 04:36:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.114 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.372 [2024-11-27 04:36:29.829365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:42.372 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.373 
04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.373 "name": "Existed_Raid", 00:14:42.373 "uuid": "81d13908-5da3-43b3-bc9a-b457e26e3e1e", 00:14:42.373 "strip_size_kb": 0, 00:14:42.373 "state": "online", 00:14:42.373 "raid_level": "raid1", 00:14:42.373 "superblock": true, 00:14:42.373 "num_base_bdevs": 3, 00:14:42.373 "num_base_bdevs_discovered": 2, 00:14:42.373 "num_base_bdevs_operational": 2, 00:14:42.373 "base_bdevs_list": [ 00:14:42.373 { 00:14:42.373 "name": null, 00:14:42.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.373 "is_configured": false, 00:14:42.373 "data_offset": 0, 00:14:42.373 "data_size": 63488 00:14:42.373 }, 00:14:42.373 { 00:14:42.373 "name": "BaseBdev2", 00:14:42.373 "uuid": "1736b675-44d5-434e-964c-0da7a3af3664", 00:14:42.373 "is_configured": true, 00:14:42.373 "data_offset": 2048, 00:14:42.373 "data_size": 63488 00:14:42.373 }, 00:14:42.373 { 00:14:42.373 "name": "BaseBdev3", 00:14:42.373 "uuid": "fa962ca0-b187-412e-806e-4142cc030f07", 00:14:42.373 "is_configured": true, 00:14:42.373 "data_offset": 2048, 00:14:42.373 "data_size": 63488 00:14:42.373 } 00:14:42.373 ] 00:14:42.373 }' 00:14:42.373 04:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.373 
04:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.937 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.937 [2024-11-27 04:36:30.482846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.195 [2024-11-27 04:36:30.628964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:43.195 [2024-11-27 04:36:30.629104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.195 [2024-11-27 04:36:30.712456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.195 [2024-11-27 04:36:30.712742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.195 [2024-11-27 04:36:30.712920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.195 BaseBdev2 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.195 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.453 [ 00:14:43.453 { 00:14:43.453 "name": "BaseBdev2", 00:14:43.453 "aliases": [ 00:14:43.453 "29562eac-b6c4-480b-8421-fa8496c2a3e9" 00:14:43.453 ], 00:14:43.453 "product_name": "Malloc disk", 00:14:43.453 "block_size": 512, 00:14:43.453 "num_blocks": 65536, 00:14:43.453 "uuid": "29562eac-b6c4-480b-8421-fa8496c2a3e9", 00:14:43.453 "assigned_rate_limits": { 00:14:43.453 "rw_ios_per_sec": 0, 00:14:43.453 "rw_mbytes_per_sec": 0, 00:14:43.453 "r_mbytes_per_sec": 0, 00:14:43.453 "w_mbytes_per_sec": 0 00:14:43.453 }, 00:14:43.453 "claimed": false, 00:14:43.453 "zoned": false, 00:14:43.453 "supported_io_types": { 00:14:43.453 "read": true, 00:14:43.453 "write": true, 00:14:43.453 "unmap": true, 00:14:43.453 "flush": true, 00:14:43.453 "reset": true, 00:14:43.453 "nvme_admin": false, 00:14:43.453 "nvme_io": false, 00:14:43.453 
"nvme_io_md": false, 00:14:43.453 "write_zeroes": true, 00:14:43.453 "zcopy": true, 00:14:43.453 "get_zone_info": false, 00:14:43.453 "zone_management": false, 00:14:43.453 "zone_append": false, 00:14:43.453 "compare": false, 00:14:43.453 "compare_and_write": false, 00:14:43.453 "abort": true, 00:14:43.453 "seek_hole": false, 00:14:43.453 "seek_data": false, 00:14:43.453 "copy": true, 00:14:43.453 "nvme_iov_md": false 00:14:43.453 }, 00:14:43.453 "memory_domains": [ 00:14:43.453 { 00:14:43.453 "dma_device_id": "system", 00:14:43.453 "dma_device_type": 1 00:14:43.453 }, 00:14:43.453 { 00:14:43.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.453 "dma_device_type": 2 00:14:43.453 } 00:14:43.453 ], 00:14:43.453 "driver_specific": {} 00:14:43.453 } 00:14:43.453 ] 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.453 BaseBdev3 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.453 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.454 [ 00:14:43.454 { 00:14:43.454 "name": "BaseBdev3", 00:14:43.454 "aliases": [ 00:14:43.454 "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2" 00:14:43.454 ], 00:14:43.454 "product_name": "Malloc disk", 00:14:43.454 "block_size": 512, 00:14:43.454 "num_blocks": 65536, 00:14:43.454 "uuid": "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2", 00:14:43.454 "assigned_rate_limits": { 00:14:43.454 "rw_ios_per_sec": 0, 00:14:43.454 "rw_mbytes_per_sec": 0, 00:14:43.454 "r_mbytes_per_sec": 0, 00:14:43.454 "w_mbytes_per_sec": 0 00:14:43.454 }, 00:14:43.454 "claimed": false, 00:14:43.454 "zoned": false, 00:14:43.454 "supported_io_types": { 00:14:43.454 "read": true, 00:14:43.454 "write": true, 00:14:43.454 "unmap": true, 00:14:43.454 "flush": true, 00:14:43.454 "reset": true, 00:14:43.454 "nvme_admin": false, 
00:14:43.454 "nvme_io": false, 00:14:43.454 "nvme_io_md": false, 00:14:43.454 "write_zeroes": true, 00:14:43.454 "zcopy": true, 00:14:43.454 "get_zone_info": false, 00:14:43.454 "zone_management": false, 00:14:43.454 "zone_append": false, 00:14:43.454 "compare": false, 00:14:43.454 "compare_and_write": false, 00:14:43.454 "abort": true, 00:14:43.454 "seek_hole": false, 00:14:43.454 "seek_data": false, 00:14:43.454 "copy": true, 00:14:43.454 "nvme_iov_md": false 00:14:43.454 }, 00:14:43.454 "memory_domains": [ 00:14:43.454 { 00:14:43.454 "dma_device_id": "system", 00:14:43.454 "dma_device_type": 1 00:14:43.454 }, 00:14:43.454 { 00:14:43.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.454 "dma_device_type": 2 00:14:43.454 } 00:14:43.454 ], 00:14:43.454 "driver_specific": {} 00:14:43.454 } 00:14:43.454 ] 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.454 [2024-11-27 04:36:30.919074] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.454 [2024-11-27 04:36:30.919261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.454 [2024-11-27 04:36:30.919389] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.454 [2024-11-27 04:36:30.921948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.454 
04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.454 "name": "Existed_Raid", 00:14:43.454 "uuid": "d5dfc265-c1fa-43bf-9d57-976aef7344d8", 00:14:43.454 "strip_size_kb": 0, 00:14:43.454 "state": "configuring", 00:14:43.454 "raid_level": "raid1", 00:14:43.454 "superblock": true, 00:14:43.454 "num_base_bdevs": 3, 00:14:43.454 "num_base_bdevs_discovered": 2, 00:14:43.454 "num_base_bdevs_operational": 3, 00:14:43.454 "base_bdevs_list": [ 00:14:43.454 { 00:14:43.454 "name": "BaseBdev1", 00:14:43.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.454 "is_configured": false, 00:14:43.454 "data_offset": 0, 00:14:43.454 "data_size": 0 00:14:43.454 }, 00:14:43.454 { 00:14:43.454 "name": "BaseBdev2", 00:14:43.454 "uuid": "29562eac-b6c4-480b-8421-fa8496c2a3e9", 00:14:43.454 "is_configured": true, 00:14:43.454 "data_offset": 2048, 00:14:43.454 "data_size": 63488 00:14:43.454 }, 00:14:43.454 { 00:14:43.454 "name": "BaseBdev3", 00:14:43.454 "uuid": "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2", 00:14:43.454 "is_configured": true, 00:14:43.454 "data_offset": 2048, 00:14:43.454 "data_size": 63488 00:14:43.454 } 00:14:43.454 ] 00:14:43.454 }' 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.454 04:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.019 [2024-11-27 04:36:31.451224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:44.019 04:36:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.019 "name": 
"Existed_Raid", 00:14:44.019 "uuid": "d5dfc265-c1fa-43bf-9d57-976aef7344d8", 00:14:44.019 "strip_size_kb": 0, 00:14:44.019 "state": "configuring", 00:14:44.019 "raid_level": "raid1", 00:14:44.019 "superblock": true, 00:14:44.019 "num_base_bdevs": 3, 00:14:44.019 "num_base_bdevs_discovered": 1, 00:14:44.019 "num_base_bdevs_operational": 3, 00:14:44.019 "base_bdevs_list": [ 00:14:44.019 { 00:14:44.019 "name": "BaseBdev1", 00:14:44.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.019 "is_configured": false, 00:14:44.019 "data_offset": 0, 00:14:44.019 "data_size": 0 00:14:44.019 }, 00:14:44.019 { 00:14:44.019 "name": null, 00:14:44.019 "uuid": "29562eac-b6c4-480b-8421-fa8496c2a3e9", 00:14:44.019 "is_configured": false, 00:14:44.019 "data_offset": 0, 00:14:44.019 "data_size": 63488 00:14:44.019 }, 00:14:44.019 { 00:14:44.019 "name": "BaseBdev3", 00:14:44.019 "uuid": "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2", 00:14:44.019 "is_configured": true, 00:14:44.019 "data_offset": 2048, 00:14:44.019 "data_size": 63488 00:14:44.019 } 00:14:44.019 ] 00:14:44.019 }' 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.019 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.584 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:44.584 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.584 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.584 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.584 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.584 04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:44.584 
04:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:44.584 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.584 04:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.584 [2024-11-27 04:36:32.020888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.584 BaseBdev1 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.584 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.585 [ 00:14:44.585 { 00:14:44.585 "name": "BaseBdev1", 00:14:44.585 "aliases": [ 00:14:44.585 "4188e473-5e64-4e1f-84e1-cc0d5f59f88e" 00:14:44.585 ], 00:14:44.585 "product_name": "Malloc disk", 00:14:44.585 "block_size": 512, 00:14:44.585 "num_blocks": 65536, 00:14:44.585 "uuid": "4188e473-5e64-4e1f-84e1-cc0d5f59f88e", 00:14:44.585 "assigned_rate_limits": { 00:14:44.585 "rw_ios_per_sec": 0, 00:14:44.585 "rw_mbytes_per_sec": 0, 00:14:44.585 "r_mbytes_per_sec": 0, 00:14:44.585 "w_mbytes_per_sec": 0 00:14:44.585 }, 00:14:44.585 "claimed": true, 00:14:44.585 "claim_type": "exclusive_write", 00:14:44.585 "zoned": false, 00:14:44.585 "supported_io_types": { 00:14:44.585 "read": true, 00:14:44.585 "write": true, 00:14:44.585 "unmap": true, 00:14:44.585 "flush": true, 00:14:44.585 "reset": true, 00:14:44.585 "nvme_admin": false, 00:14:44.585 "nvme_io": false, 00:14:44.585 "nvme_io_md": false, 00:14:44.585 "write_zeroes": true, 00:14:44.585 "zcopy": true, 00:14:44.585 "get_zone_info": false, 00:14:44.585 "zone_management": false, 00:14:44.585 "zone_append": false, 00:14:44.585 "compare": false, 00:14:44.585 "compare_and_write": false, 00:14:44.585 "abort": true, 00:14:44.585 "seek_hole": false, 00:14:44.585 "seek_data": false, 00:14:44.585 "copy": true, 00:14:44.585 "nvme_iov_md": false 00:14:44.585 }, 00:14:44.585 "memory_domains": [ 00:14:44.585 { 00:14:44.585 "dma_device_id": "system", 00:14:44.585 "dma_device_type": 1 00:14:44.585 }, 00:14:44.585 { 00:14:44.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.585 "dma_device_type": 2 00:14:44.585 } 00:14:44.585 ], 00:14:44.585 "driver_specific": {} 00:14:44.585 } 00:14:44.585 ] 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:44.585 
04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.585 "name": "Existed_Raid", 00:14:44.585 "uuid": "d5dfc265-c1fa-43bf-9d57-976aef7344d8", 00:14:44.585 "strip_size_kb": 0, 
00:14:44.585 "state": "configuring", 00:14:44.585 "raid_level": "raid1", 00:14:44.585 "superblock": true, 00:14:44.585 "num_base_bdevs": 3, 00:14:44.585 "num_base_bdevs_discovered": 2, 00:14:44.585 "num_base_bdevs_operational": 3, 00:14:44.585 "base_bdevs_list": [ 00:14:44.585 { 00:14:44.585 "name": "BaseBdev1", 00:14:44.585 "uuid": "4188e473-5e64-4e1f-84e1-cc0d5f59f88e", 00:14:44.585 "is_configured": true, 00:14:44.585 "data_offset": 2048, 00:14:44.585 "data_size": 63488 00:14:44.585 }, 00:14:44.585 { 00:14:44.585 "name": null, 00:14:44.585 "uuid": "29562eac-b6c4-480b-8421-fa8496c2a3e9", 00:14:44.585 "is_configured": false, 00:14:44.585 "data_offset": 0, 00:14:44.585 "data_size": 63488 00:14:44.585 }, 00:14:44.585 { 00:14:44.585 "name": "BaseBdev3", 00:14:44.585 "uuid": "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2", 00:14:44.585 "is_configured": true, 00:14:44.585 "data_offset": 2048, 00:14:44.585 "data_size": 63488 00:14:44.585 } 00:14:44.585 ] 00:14:44.585 }' 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.585 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.151 [2024-11-27 04:36:32.573054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.151 "name": "Existed_Raid", 00:14:45.151 "uuid": "d5dfc265-c1fa-43bf-9d57-976aef7344d8", 00:14:45.151 "strip_size_kb": 0, 00:14:45.151 "state": "configuring", 00:14:45.151 "raid_level": "raid1", 00:14:45.151 "superblock": true, 00:14:45.151 "num_base_bdevs": 3, 00:14:45.151 "num_base_bdevs_discovered": 1, 00:14:45.151 "num_base_bdevs_operational": 3, 00:14:45.151 "base_bdevs_list": [ 00:14:45.151 { 00:14:45.151 "name": "BaseBdev1", 00:14:45.151 "uuid": "4188e473-5e64-4e1f-84e1-cc0d5f59f88e", 00:14:45.151 "is_configured": true, 00:14:45.151 "data_offset": 2048, 00:14:45.151 "data_size": 63488 00:14:45.151 }, 00:14:45.151 { 00:14:45.151 "name": null, 00:14:45.151 "uuid": "29562eac-b6c4-480b-8421-fa8496c2a3e9", 00:14:45.151 "is_configured": false, 00:14:45.151 "data_offset": 0, 00:14:45.151 "data_size": 63488 00:14:45.151 }, 00:14:45.151 { 00:14:45.151 "name": null, 00:14:45.151 "uuid": "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2", 00:14:45.151 "is_configured": false, 00:14:45.151 "data_offset": 0, 00:14:45.151 "data_size": 63488 00:14:45.151 } 00:14:45.151 ] 00:14:45.151 }' 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.151 04:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.410 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.410 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.410 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.410 
04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.668 [2024-11-27 04:36:33.105244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.668 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.668 "name": "Existed_Raid", 00:14:45.668 "uuid": "d5dfc265-c1fa-43bf-9d57-976aef7344d8", 00:14:45.668 "strip_size_kb": 0, 00:14:45.668 "state": "configuring", 00:14:45.668 "raid_level": "raid1", 00:14:45.668 "superblock": true, 00:14:45.668 "num_base_bdevs": 3, 00:14:45.668 "num_base_bdevs_discovered": 2, 00:14:45.668 "num_base_bdevs_operational": 3, 00:14:45.668 "base_bdevs_list": [ 00:14:45.668 { 00:14:45.668 "name": "BaseBdev1", 00:14:45.668 "uuid": "4188e473-5e64-4e1f-84e1-cc0d5f59f88e", 00:14:45.668 "is_configured": true, 00:14:45.668 "data_offset": 2048, 00:14:45.668 "data_size": 63488 00:14:45.668 }, 00:14:45.668 { 00:14:45.668 "name": null, 00:14:45.668 "uuid": "29562eac-b6c4-480b-8421-fa8496c2a3e9", 00:14:45.668 "is_configured": false, 00:14:45.668 "data_offset": 0, 00:14:45.668 "data_size": 63488 00:14:45.668 }, 00:14:45.668 { 00:14:45.668 "name": "BaseBdev3", 00:14:45.669 "uuid": "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2", 00:14:45.669 "is_configured": true, 00:14:45.669 "data_offset": 2048, 00:14:45.669 "data_size": 63488 00:14:45.669 } 00:14:45.669 ] 00:14:45.669 }' 00:14:45.669 04:36:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.669 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.234 [2024-11-27 04:36:33.705424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.234 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.234 "name": "Existed_Raid", 00:14:46.234 "uuid": "d5dfc265-c1fa-43bf-9d57-976aef7344d8", 00:14:46.234 "strip_size_kb": 0, 00:14:46.234 "state": "configuring", 00:14:46.234 "raid_level": "raid1", 00:14:46.234 "superblock": true, 00:14:46.234 "num_base_bdevs": 3, 00:14:46.234 "num_base_bdevs_discovered": 1, 00:14:46.234 "num_base_bdevs_operational": 3, 00:14:46.234 "base_bdevs_list": [ 00:14:46.234 { 00:14:46.234 "name": null, 00:14:46.234 "uuid": "4188e473-5e64-4e1f-84e1-cc0d5f59f88e", 00:14:46.234 "is_configured": false, 00:14:46.234 "data_offset": 0, 00:14:46.234 "data_size": 63488 00:14:46.234 }, 00:14:46.234 { 00:14:46.234 "name": null, 00:14:46.234 "uuid": 
"29562eac-b6c4-480b-8421-fa8496c2a3e9", 00:14:46.234 "is_configured": false, 00:14:46.234 "data_offset": 0, 00:14:46.234 "data_size": 63488 00:14:46.234 }, 00:14:46.234 { 00:14:46.235 "name": "BaseBdev3", 00:14:46.235 "uuid": "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2", 00:14:46.235 "is_configured": true, 00:14:46.235 "data_offset": 2048, 00:14:46.235 "data_size": 63488 00:14:46.235 } 00:14:46.235 ] 00:14:46.235 }' 00:14:46.235 04:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.235 04:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.799 [2024-11-27 04:36:34.330414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.799 "name": "Existed_Raid", 00:14:46.799 "uuid": "d5dfc265-c1fa-43bf-9d57-976aef7344d8", 00:14:46.799 "strip_size_kb": 0, 00:14:46.799 "state": "configuring", 00:14:46.799 
"raid_level": "raid1", 00:14:46.799 "superblock": true, 00:14:46.799 "num_base_bdevs": 3, 00:14:46.799 "num_base_bdevs_discovered": 2, 00:14:46.799 "num_base_bdevs_operational": 3, 00:14:46.799 "base_bdevs_list": [ 00:14:46.799 { 00:14:46.799 "name": null, 00:14:46.799 "uuid": "4188e473-5e64-4e1f-84e1-cc0d5f59f88e", 00:14:46.799 "is_configured": false, 00:14:46.799 "data_offset": 0, 00:14:46.799 "data_size": 63488 00:14:46.799 }, 00:14:46.799 { 00:14:46.799 "name": "BaseBdev2", 00:14:46.799 "uuid": "29562eac-b6c4-480b-8421-fa8496c2a3e9", 00:14:46.799 "is_configured": true, 00:14:46.799 "data_offset": 2048, 00:14:46.799 "data_size": 63488 00:14:46.799 }, 00:14:46.799 { 00:14:46.799 "name": "BaseBdev3", 00:14:46.799 "uuid": "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2", 00:14:46.799 "is_configured": true, 00:14:46.799 "data_offset": 2048, 00:14:46.799 "data_size": 63488 00:14:46.799 } 00:14:46.799 ] 00:14:46.799 }' 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.799 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.420 04:36:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4188e473-5e64-4e1f-84e1-cc0d5f59f88e 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.420 [2024-11-27 04:36:34.964843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:47.420 [2024-11-27 04:36:34.965116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:47.420 [2024-11-27 04:36:34.965135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:47.420 NewBaseBdev 00:14:47.420 [2024-11-27 04:36:34.965439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:47.420 [2024-11-27 04:36:34.965625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:47.420 [2024-11-27 04:36:34.965647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:47.420 [2024-11-27 04:36:34.965847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:47.420 
04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.420 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.420 [ 00:14:47.420 { 00:14:47.420 "name": "NewBaseBdev", 00:14:47.420 "aliases": [ 00:14:47.420 "4188e473-5e64-4e1f-84e1-cc0d5f59f88e" 00:14:47.420 ], 00:14:47.420 "product_name": "Malloc disk", 00:14:47.420 "block_size": 512, 00:14:47.420 "num_blocks": 65536, 00:14:47.420 "uuid": "4188e473-5e64-4e1f-84e1-cc0d5f59f88e", 00:14:47.420 "assigned_rate_limits": { 00:14:47.420 "rw_ios_per_sec": 0, 00:14:47.420 "rw_mbytes_per_sec": 0, 00:14:47.420 "r_mbytes_per_sec": 0, 00:14:47.420 "w_mbytes_per_sec": 0 00:14:47.420 }, 00:14:47.420 "claimed": true, 00:14:47.420 "claim_type": "exclusive_write", 00:14:47.420 
"zoned": false, 00:14:47.420 "supported_io_types": { 00:14:47.420 "read": true, 00:14:47.420 "write": true, 00:14:47.420 "unmap": true, 00:14:47.420 "flush": true, 00:14:47.420 "reset": true, 00:14:47.420 "nvme_admin": false, 00:14:47.420 "nvme_io": false, 00:14:47.420 "nvme_io_md": false, 00:14:47.420 "write_zeroes": true, 00:14:47.420 "zcopy": true, 00:14:47.420 "get_zone_info": false, 00:14:47.420 "zone_management": false, 00:14:47.420 "zone_append": false, 00:14:47.420 "compare": false, 00:14:47.420 "compare_and_write": false, 00:14:47.420 "abort": true, 00:14:47.420 "seek_hole": false, 00:14:47.420 "seek_data": false, 00:14:47.421 "copy": true, 00:14:47.421 "nvme_iov_md": false 00:14:47.421 }, 00:14:47.421 "memory_domains": [ 00:14:47.421 { 00:14:47.421 "dma_device_id": "system", 00:14:47.421 "dma_device_type": 1 00:14:47.421 }, 00:14:47.421 { 00:14:47.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.421 "dma_device_type": 2 00:14:47.421 } 00:14:47.421 ], 00:14:47.421 "driver_specific": {} 00:14:47.421 } 00:14:47.421 ] 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.421 04:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.421 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.421 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.421 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.421 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.421 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.678 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.678 "name": "Existed_Raid", 00:14:47.678 "uuid": "d5dfc265-c1fa-43bf-9d57-976aef7344d8", 00:14:47.678 "strip_size_kb": 0, 00:14:47.678 "state": "online", 00:14:47.678 "raid_level": "raid1", 00:14:47.678 "superblock": true, 00:14:47.678 "num_base_bdevs": 3, 00:14:47.678 "num_base_bdevs_discovered": 3, 00:14:47.678 "num_base_bdevs_operational": 3, 00:14:47.678 "base_bdevs_list": [ 00:14:47.678 { 00:14:47.678 "name": "NewBaseBdev", 00:14:47.678 "uuid": "4188e473-5e64-4e1f-84e1-cc0d5f59f88e", 00:14:47.678 "is_configured": true, 00:14:47.678 "data_offset": 2048, 00:14:47.678 "data_size": 63488 00:14:47.678 }, 00:14:47.678 { 00:14:47.678 "name": "BaseBdev2", 00:14:47.678 "uuid": "29562eac-b6c4-480b-8421-fa8496c2a3e9", 00:14:47.678 "is_configured": true, 00:14:47.678 "data_offset": 2048, 00:14:47.678 "data_size": 63488 00:14:47.678 }, 00:14:47.678 
{ 00:14:47.678 "name": "BaseBdev3", 00:14:47.678 "uuid": "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2", 00:14:47.678 "is_configured": true, 00:14:47.678 "data_offset": 2048, 00:14:47.678 "data_size": 63488 00:14:47.678 } 00:14:47.678 ] 00:14:47.678 }' 00:14:47.678 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.678 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.935 [2024-11-27 04:36:35.501401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.935 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.935 "name": "Existed_Raid", 00:14:47.935 
"aliases": [ 00:14:47.935 "d5dfc265-c1fa-43bf-9d57-976aef7344d8" 00:14:47.935 ], 00:14:47.935 "product_name": "Raid Volume", 00:14:47.935 "block_size": 512, 00:14:47.935 "num_blocks": 63488, 00:14:47.935 "uuid": "d5dfc265-c1fa-43bf-9d57-976aef7344d8", 00:14:47.935 "assigned_rate_limits": { 00:14:47.935 "rw_ios_per_sec": 0, 00:14:47.935 "rw_mbytes_per_sec": 0, 00:14:47.935 "r_mbytes_per_sec": 0, 00:14:47.935 "w_mbytes_per_sec": 0 00:14:47.935 }, 00:14:47.935 "claimed": false, 00:14:47.935 "zoned": false, 00:14:47.935 "supported_io_types": { 00:14:47.935 "read": true, 00:14:47.935 "write": true, 00:14:47.936 "unmap": false, 00:14:47.936 "flush": false, 00:14:47.936 "reset": true, 00:14:47.936 "nvme_admin": false, 00:14:47.936 "nvme_io": false, 00:14:47.936 "nvme_io_md": false, 00:14:47.936 "write_zeroes": true, 00:14:47.936 "zcopy": false, 00:14:47.936 "get_zone_info": false, 00:14:47.936 "zone_management": false, 00:14:47.936 "zone_append": false, 00:14:47.936 "compare": false, 00:14:47.936 "compare_and_write": false, 00:14:47.936 "abort": false, 00:14:47.936 "seek_hole": false, 00:14:47.936 "seek_data": false, 00:14:47.936 "copy": false, 00:14:47.936 "nvme_iov_md": false 00:14:47.936 }, 00:14:47.936 "memory_domains": [ 00:14:47.936 { 00:14:47.936 "dma_device_id": "system", 00:14:47.936 "dma_device_type": 1 00:14:47.936 }, 00:14:47.936 { 00:14:47.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.936 "dma_device_type": 2 00:14:47.936 }, 00:14:47.936 { 00:14:47.936 "dma_device_id": "system", 00:14:47.936 "dma_device_type": 1 00:14:47.936 }, 00:14:47.936 { 00:14:47.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.936 "dma_device_type": 2 00:14:47.936 }, 00:14:47.936 { 00:14:47.936 "dma_device_id": "system", 00:14:47.936 "dma_device_type": 1 00:14:47.936 }, 00:14:47.936 { 00:14:47.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.936 "dma_device_type": 2 00:14:47.936 } 00:14:47.936 ], 00:14:47.936 "driver_specific": { 00:14:47.936 "raid": { 00:14:47.936 
"uuid": "d5dfc265-c1fa-43bf-9d57-976aef7344d8", 00:14:47.936 "strip_size_kb": 0, 00:14:47.936 "state": "online", 00:14:47.936 "raid_level": "raid1", 00:14:47.936 "superblock": true, 00:14:47.936 "num_base_bdevs": 3, 00:14:47.936 "num_base_bdevs_discovered": 3, 00:14:47.936 "num_base_bdevs_operational": 3, 00:14:47.936 "base_bdevs_list": [ 00:14:47.936 { 00:14:47.936 "name": "NewBaseBdev", 00:14:47.936 "uuid": "4188e473-5e64-4e1f-84e1-cc0d5f59f88e", 00:14:47.936 "is_configured": true, 00:14:47.936 "data_offset": 2048, 00:14:47.936 "data_size": 63488 00:14:47.936 }, 00:14:47.936 { 00:14:47.936 "name": "BaseBdev2", 00:14:47.936 "uuid": "29562eac-b6c4-480b-8421-fa8496c2a3e9", 00:14:47.936 "is_configured": true, 00:14:47.936 "data_offset": 2048, 00:14:47.936 "data_size": 63488 00:14:47.936 }, 00:14:47.936 { 00:14:47.936 "name": "BaseBdev3", 00:14:47.936 "uuid": "bf1106f0-e0a8-4ebf-bc92-f7dced66dbb2", 00:14:47.936 "is_configured": true, 00:14:47.936 "data_offset": 2048, 00:14:47.936 "data_size": 63488 00:14:47.936 } 00:14:47.936 ] 00:14:47.936 } 00:14:47.936 } 00:14:47.936 }' 00:14:47.936 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:48.193 BaseBdev2 00:14:48.193 BaseBdev3' 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:48.193 04:36:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:48.193 04:36:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.194 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.194 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.194 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.451 [2024-11-27 04:36:35.825088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.451 [2024-11-27 04:36:35.825129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.451 [2024-11-27 04:36:35.825221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.451 [2024-11-27 04:36:35.825584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.451 [2024-11-27 04:36:35.825602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68205 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68205 ']' 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68205 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68205 00:14:48.451 killing process with pid 68205 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68205' 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68205 00:14:48.451 [2024-11-27 04:36:35.860071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.451 04:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68205 00:14:48.708 [2024-11-27 04:36:36.127516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.641 04:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:49.641 00:14:49.641 real 0m11.517s 00:14:49.641 user 0m18.947s 00:14:49.641 sys 0m1.685s 00:14:49.641 04:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.641 ************************************ 00:14:49.641 END TEST raid_state_function_test_sb 00:14:49.641 ************************************ 00:14:49.641 04:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.641 04:36:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:14:49.641 04:36:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:49.641 04:36:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.641 04:36:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.641 ************************************ 00:14:49.641 START TEST raid_superblock_test 00:14:49.641 ************************************ 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68837 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68837 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68837 ']' 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.641 04:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.898 [2024-11-27 04:36:37.316294] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:14:49.898 [2024-11-27 04:36:37.316664] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68837 ] 00:14:49.898 [2024-11-27 04:36:37.490683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.157 [2024-11-27 04:36:37.621600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.415 [2024-11-27 04:36:37.823470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.415 [2024-11-27 04:36:37.823552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:51.031 
04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.031 malloc1 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.031 [2024-11-27 04:36:38.442912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:51.031 [2024-11-27 04:36:38.443126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.031 [2024-11-27 04:36:38.443169] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:51.031 [2024-11-27 04:36:38.443187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.031 [2024-11-27 04:36:38.445998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.031 [2024-11-27 04:36:38.446045] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:51.031 pt1 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.031 malloc2 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.031 [2024-11-27 04:36:38.494792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:51.031 [2024-11-27 04:36:38.494858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.031 [2024-11-27 04:36:38.494897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:51.031 [2024-11-27 04:36:38.494912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.031 [2024-11-27 04:36:38.497627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.031 [2024-11-27 04:36:38.497814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:51.031 
pt2 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.031 malloc3 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.031 [2024-11-27 04:36:38.564543] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:51.031 [2024-11-27 04:36:38.564619] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.031 [2024-11-27 04:36:38.564653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:51.031 [2024-11-27 04:36:38.564668] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.031 [2024-11-27 04:36:38.567521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.031 [2024-11-27 04:36:38.567568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:51.031 pt3 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.031 [2024-11-27 04:36:38.576602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:51.031 [2024-11-27 04:36:38.579139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:51.031 [2024-11-27 04:36:38.579368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:51.031 [2024-11-27 04:36:38.579593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:51.031 [2024-11-27 04:36:38.579629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:51.031 [2024-11-27 04:36:38.579959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:51.031 
[2024-11-27 04:36:38.580194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:51.031 [2024-11-27 04:36:38.580267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:51.031 [2024-11-27 04:36:38.580512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:51.031 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:51.032 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.294 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.294 "name": "raid_bdev1", 00:14:51.294 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:51.294 "strip_size_kb": 0, 00:14:51.294 "state": "online", 00:14:51.294 "raid_level": "raid1", 00:14:51.294 "superblock": true, 00:14:51.294 "num_base_bdevs": 3, 00:14:51.294 "num_base_bdevs_discovered": 3, 00:14:51.294 "num_base_bdevs_operational": 3, 00:14:51.294 "base_bdevs_list": [ 00:14:51.294 { 00:14:51.294 "name": "pt1", 00:14:51.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:51.294 "is_configured": true, 00:14:51.294 "data_offset": 2048, 00:14:51.294 "data_size": 63488 00:14:51.294 }, 00:14:51.294 { 00:14:51.294 "name": "pt2", 00:14:51.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.294 "is_configured": true, 00:14:51.294 "data_offset": 2048, 00:14:51.294 "data_size": 63488 00:14:51.294 }, 00:14:51.294 { 00:14:51.294 "name": "pt3", 00:14:51.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:51.294 "is_configured": true, 00:14:51.294 "data_offset": 2048, 00:14:51.294 "data_size": 63488 00:14:51.294 } 00:14:51.294 ] 00:14:51.294 }' 00:14:51.294 04:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.294 04:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:51.553 04:36:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.553 [2024-11-27 04:36:39.085123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:51.553 "name": "raid_bdev1", 00:14:51.553 "aliases": [ 00:14:51.553 "8d926b97-699a-45f0-bdfc-5131b047d94a" 00:14:51.553 ], 00:14:51.553 "product_name": "Raid Volume", 00:14:51.553 "block_size": 512, 00:14:51.553 "num_blocks": 63488, 00:14:51.553 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:51.553 "assigned_rate_limits": { 00:14:51.553 "rw_ios_per_sec": 0, 00:14:51.553 "rw_mbytes_per_sec": 0, 00:14:51.553 "r_mbytes_per_sec": 0, 00:14:51.553 "w_mbytes_per_sec": 0 00:14:51.553 }, 00:14:51.553 "claimed": false, 00:14:51.553 "zoned": false, 00:14:51.553 "supported_io_types": { 00:14:51.553 "read": true, 00:14:51.553 "write": true, 00:14:51.553 "unmap": false, 00:14:51.553 "flush": false, 00:14:51.553 "reset": true, 00:14:51.553 "nvme_admin": false, 00:14:51.553 "nvme_io": false, 00:14:51.553 "nvme_io_md": false, 00:14:51.553 "write_zeroes": true, 00:14:51.553 "zcopy": false, 00:14:51.553 "get_zone_info": false, 00:14:51.553 "zone_management": false, 00:14:51.553 "zone_append": false, 00:14:51.553 "compare": false, 00:14:51.553 
"compare_and_write": false, 00:14:51.553 "abort": false, 00:14:51.553 "seek_hole": false, 00:14:51.553 "seek_data": false, 00:14:51.553 "copy": false, 00:14:51.553 "nvme_iov_md": false 00:14:51.553 }, 00:14:51.553 "memory_domains": [ 00:14:51.553 { 00:14:51.553 "dma_device_id": "system", 00:14:51.553 "dma_device_type": 1 00:14:51.553 }, 00:14:51.553 { 00:14:51.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.553 "dma_device_type": 2 00:14:51.553 }, 00:14:51.553 { 00:14:51.553 "dma_device_id": "system", 00:14:51.553 "dma_device_type": 1 00:14:51.553 }, 00:14:51.553 { 00:14:51.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.553 "dma_device_type": 2 00:14:51.553 }, 00:14:51.553 { 00:14:51.553 "dma_device_id": "system", 00:14:51.553 "dma_device_type": 1 00:14:51.553 }, 00:14:51.553 { 00:14:51.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.553 "dma_device_type": 2 00:14:51.553 } 00:14:51.553 ], 00:14:51.553 "driver_specific": { 00:14:51.553 "raid": { 00:14:51.553 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:51.553 "strip_size_kb": 0, 00:14:51.553 "state": "online", 00:14:51.553 "raid_level": "raid1", 00:14:51.553 "superblock": true, 00:14:51.553 "num_base_bdevs": 3, 00:14:51.553 "num_base_bdevs_discovered": 3, 00:14:51.553 "num_base_bdevs_operational": 3, 00:14:51.553 "base_bdevs_list": [ 00:14:51.553 { 00:14:51.553 "name": "pt1", 00:14:51.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:51.553 "is_configured": true, 00:14:51.553 "data_offset": 2048, 00:14:51.553 "data_size": 63488 00:14:51.553 }, 00:14:51.553 { 00:14:51.553 "name": "pt2", 00:14:51.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.553 "is_configured": true, 00:14:51.553 "data_offset": 2048, 00:14:51.553 "data_size": 63488 00:14:51.553 }, 00:14:51.553 { 00:14:51.553 "name": "pt3", 00:14:51.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:51.553 "is_configured": true, 00:14:51.553 "data_offset": 2048, 00:14:51.553 "data_size": 63488 00:14:51.553 } 
00:14:51.553 ] 00:14:51.553 } 00:14:51.553 } 00:14:51.553 }' 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:51.553 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:51.553 pt2 00:14:51.553 pt3' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.810 [2024-11-27 04:36:39.377161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8d926b97-699a-45f0-bdfc-5131b047d94a 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8d926b97-699a-45f0-bdfc-5131b047d94a ']' 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.810 [2024-11-27 04:36:39.420808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.810 [2024-11-27 04:36:39.420843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.810 [2024-11-27 04:36:39.420943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.810 [2024-11-27 04:36:39.421045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.810 [2024-11-27 04:36:39.421069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.810 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.068 [2024-11-27 04:36:39.572923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:52.068 [2024-11-27 04:36:39.575383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:52.068 [2024-11-27 04:36:39.575467] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:52.068 [2024-11-27 04:36:39.575546] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:52.068 [2024-11-27 04:36:39.575626] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:52.068 [2024-11-27 04:36:39.575661] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:52.068 [2024-11-27 04:36:39.575690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.068 [2024-11-27 04:36:39.575705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:52.068 request: 00:14:52.068 { 00:14:52.068 "name": "raid_bdev1", 00:14:52.068 "raid_level": "raid1", 00:14:52.068 "base_bdevs": [ 00:14:52.068 "malloc1", 00:14:52.068 "malloc2", 00:14:52.068 "malloc3" 00:14:52.068 ], 00:14:52.068 "superblock": false, 00:14:52.068 "method": "bdev_raid_create", 00:14:52.068 "req_id": 1 00:14:52.068 } 00:14:52.068 Got JSON-RPC error response 00:14:52.068 response: 00:14:52.068 { 00:14:52.068 "code": -17, 00:14:52.068 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:52.068 } 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.068 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.069 [2024-11-27 04:36:39.632831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:52.069 [2024-11-27 04:36:39.633002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.069 [2024-11-27 04:36:39.633129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:52.069 [2024-11-27 04:36:39.633241] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.069 [2024-11-27 04:36:39.636151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.069 [2024-11-27 04:36:39.636301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:52.069 [2024-11-27 04:36:39.636486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:52.069 [2024-11-27 04:36:39.636661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:52.069 pt1 00:14:52.069 
04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.069 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.326 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.326 "name": "raid_bdev1", 00:14:52.326 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:52.326 "strip_size_kb": 0, 00:14:52.326 
"state": "configuring", 00:14:52.326 "raid_level": "raid1", 00:14:52.326 "superblock": true, 00:14:52.326 "num_base_bdevs": 3, 00:14:52.326 "num_base_bdevs_discovered": 1, 00:14:52.326 "num_base_bdevs_operational": 3, 00:14:52.326 "base_bdevs_list": [ 00:14:52.326 { 00:14:52.326 "name": "pt1", 00:14:52.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.326 "is_configured": true, 00:14:52.326 "data_offset": 2048, 00:14:52.326 "data_size": 63488 00:14:52.326 }, 00:14:52.326 { 00:14:52.326 "name": null, 00:14:52.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.326 "is_configured": false, 00:14:52.326 "data_offset": 2048, 00:14:52.326 "data_size": 63488 00:14:52.326 }, 00:14:52.326 { 00:14:52.326 "name": null, 00:14:52.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.326 "is_configured": false, 00:14:52.326 "data_offset": 2048, 00:14:52.326 "data_size": 63488 00:14:52.326 } 00:14:52.326 ] 00:14:52.326 }' 00:14:52.326 04:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.326 04:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.585 [2024-11-27 04:36:40.161174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:52.585 [2024-11-27 04:36:40.161252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.585 [2024-11-27 04:36:40.161288] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:52.585 
[2024-11-27 04:36:40.161303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.585 [2024-11-27 04:36:40.161909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.585 [2024-11-27 04:36:40.161941] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:52.585 [2024-11-27 04:36:40.162055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:52.585 [2024-11-27 04:36:40.162088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:52.585 pt2 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.585 [2024-11-27 04:36:40.169171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.585 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.843 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.844 "name": "raid_bdev1", 00:14:52.844 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:52.844 "strip_size_kb": 0, 00:14:52.844 "state": "configuring", 00:14:52.844 "raid_level": "raid1", 00:14:52.844 "superblock": true, 00:14:52.844 "num_base_bdevs": 3, 00:14:52.844 "num_base_bdevs_discovered": 1, 00:14:52.844 "num_base_bdevs_operational": 3, 00:14:52.844 "base_bdevs_list": [ 00:14:52.844 { 00:14:52.844 "name": "pt1", 00:14:52.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.844 "is_configured": true, 00:14:52.844 "data_offset": 2048, 00:14:52.844 "data_size": 63488 00:14:52.844 }, 00:14:52.844 { 00:14:52.844 "name": null, 00:14:52.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.844 "is_configured": false, 00:14:52.844 "data_offset": 0, 00:14:52.844 "data_size": 63488 00:14:52.844 }, 00:14:52.844 { 00:14:52.844 "name": null, 00:14:52.844 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.844 "is_configured": false, 00:14:52.844 
"data_offset": 2048, 00:14:52.844 "data_size": 63488 00:14:52.844 } 00:14:52.844 ] 00:14:52.844 }' 00:14:52.844 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.844 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.102 [2024-11-27 04:36:40.653281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:53.102 [2024-11-27 04:36:40.653378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.102 [2024-11-27 04:36:40.653411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:53.102 [2024-11-27 04:36:40.653429] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.102 [2024-11-27 04:36:40.654061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.102 [2024-11-27 04:36:40.654098] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:53.102 [2024-11-27 04:36:40.654202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:53.102 [2024-11-27 04:36:40.654251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:53.102 pt2 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.102 04:36:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.102 [2024-11-27 04:36:40.661265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:53.102 [2024-11-27 04:36:40.661328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.102 [2024-11-27 04:36:40.661352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:53.102 [2024-11-27 04:36:40.661368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.102 [2024-11-27 04:36:40.661922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.102 [2024-11-27 04:36:40.661962] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:53.102 [2024-11-27 04:36:40.662055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:53.102 [2024-11-27 04:36:40.662094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:53.102 [2024-11-27 04:36:40.662259] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:53.102 [2024-11-27 04:36:40.662282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:53.102 [2024-11-27 04:36:40.662594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:53.102 [2024-11-27 04:36:40.662814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:14:53.102 [2024-11-27 04:36:40.662830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:53.102 [2024-11-27 04:36:40.663005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.102 pt3 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.102 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.103 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.103 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.103 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.103 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.103 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.103 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.103 04:36:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.103 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.103 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.103 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.103 "name": "raid_bdev1", 00:14:53.103 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:53.103 "strip_size_kb": 0, 00:14:53.103 "state": "online", 00:14:53.103 "raid_level": "raid1", 00:14:53.103 "superblock": true, 00:14:53.103 "num_base_bdevs": 3, 00:14:53.103 "num_base_bdevs_discovered": 3, 00:14:53.103 "num_base_bdevs_operational": 3, 00:14:53.103 "base_bdevs_list": [ 00:14:53.103 { 00:14:53.103 "name": "pt1", 00:14:53.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.103 "is_configured": true, 00:14:53.103 "data_offset": 2048, 00:14:53.103 "data_size": 63488 00:14:53.103 }, 00:14:53.103 { 00:14:53.103 "name": "pt2", 00:14:53.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.103 "is_configured": true, 00:14:53.103 "data_offset": 2048, 00:14:53.103 "data_size": 63488 00:14:53.103 }, 00:14:53.103 { 00:14:53.103 "name": "pt3", 00:14:53.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:53.103 "is_configured": true, 00:14:53.103 "data_offset": 2048, 00:14:53.103 "data_size": 63488 00:14:53.103 } 00:14:53.103 ] 00:14:53.103 }' 00:14:53.361 04:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.361 04:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.620 [2024-11-27 04:36:41.173806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.620 "name": "raid_bdev1", 00:14:53.620 "aliases": [ 00:14:53.620 "8d926b97-699a-45f0-bdfc-5131b047d94a" 00:14:53.620 ], 00:14:53.620 "product_name": "Raid Volume", 00:14:53.620 "block_size": 512, 00:14:53.620 "num_blocks": 63488, 00:14:53.620 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:53.620 "assigned_rate_limits": { 00:14:53.620 "rw_ios_per_sec": 0, 00:14:53.620 "rw_mbytes_per_sec": 0, 00:14:53.620 "r_mbytes_per_sec": 0, 00:14:53.620 "w_mbytes_per_sec": 0 00:14:53.620 }, 00:14:53.620 "claimed": false, 00:14:53.620 "zoned": false, 00:14:53.620 "supported_io_types": { 00:14:53.620 "read": true, 00:14:53.620 "write": true, 00:14:53.620 "unmap": false, 00:14:53.620 "flush": false, 00:14:53.620 "reset": true, 00:14:53.620 "nvme_admin": false, 00:14:53.620 "nvme_io": false, 00:14:53.620 "nvme_io_md": false, 00:14:53.620 "write_zeroes": true, 00:14:53.620 "zcopy": false, 00:14:53.620 "get_zone_info": 
false, 00:14:53.620 "zone_management": false, 00:14:53.620 "zone_append": false, 00:14:53.620 "compare": false, 00:14:53.620 "compare_and_write": false, 00:14:53.620 "abort": false, 00:14:53.620 "seek_hole": false, 00:14:53.620 "seek_data": false, 00:14:53.620 "copy": false, 00:14:53.620 "nvme_iov_md": false 00:14:53.620 }, 00:14:53.620 "memory_domains": [ 00:14:53.620 { 00:14:53.620 "dma_device_id": "system", 00:14:53.620 "dma_device_type": 1 00:14:53.620 }, 00:14:53.620 { 00:14:53.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.620 "dma_device_type": 2 00:14:53.620 }, 00:14:53.620 { 00:14:53.620 "dma_device_id": "system", 00:14:53.620 "dma_device_type": 1 00:14:53.620 }, 00:14:53.620 { 00:14:53.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.620 "dma_device_type": 2 00:14:53.620 }, 00:14:53.620 { 00:14:53.620 "dma_device_id": "system", 00:14:53.620 "dma_device_type": 1 00:14:53.620 }, 00:14:53.620 { 00:14:53.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.620 "dma_device_type": 2 00:14:53.620 } 00:14:53.620 ], 00:14:53.620 "driver_specific": { 00:14:53.620 "raid": { 00:14:53.620 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:53.620 "strip_size_kb": 0, 00:14:53.620 "state": "online", 00:14:53.620 "raid_level": "raid1", 00:14:53.620 "superblock": true, 00:14:53.620 "num_base_bdevs": 3, 00:14:53.620 "num_base_bdevs_discovered": 3, 00:14:53.620 "num_base_bdevs_operational": 3, 00:14:53.620 "base_bdevs_list": [ 00:14:53.620 { 00:14:53.620 "name": "pt1", 00:14:53.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.620 "is_configured": true, 00:14:53.620 "data_offset": 2048, 00:14:53.620 "data_size": 63488 00:14:53.620 }, 00:14:53.620 { 00:14:53.620 "name": "pt2", 00:14:53.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.620 "is_configured": true, 00:14:53.620 "data_offset": 2048, 00:14:53.620 "data_size": 63488 00:14:53.620 }, 00:14:53.620 { 00:14:53.620 "name": "pt3", 00:14:53.620 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:53.620 "is_configured": true, 00:14:53.620 "data_offset": 2048, 00:14:53.620 "data_size": 63488 00:14:53.620 } 00:14:53.620 ] 00:14:53.620 } 00:14:53.620 } 00:14:53.620 }' 00:14:53.620 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:53.878 pt2 00:14:53.878 pt3' 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.878 04:36:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:53.878 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.878 [2024-11-27 04:36:41.477816] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8d926b97-699a-45f0-bdfc-5131b047d94a '!=' 8d926b97-699a-45f0-bdfc-5131b047d94a ']' 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.135 [2024-11-27 04:36:41.529500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.135 04:36:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.135 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.136 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.136 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.136 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.136 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.136 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.136 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.136 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.136 "name": "raid_bdev1", 00:14:54.136 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:54.136 "strip_size_kb": 0, 00:14:54.136 "state": "online", 00:14:54.136 "raid_level": "raid1", 00:14:54.136 "superblock": true, 00:14:54.136 "num_base_bdevs": 3, 00:14:54.136 "num_base_bdevs_discovered": 2, 00:14:54.136 "num_base_bdevs_operational": 2, 00:14:54.136 "base_bdevs_list": [ 00:14:54.136 { 00:14:54.136 "name": null, 00:14:54.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.136 "is_configured": false, 00:14:54.136 "data_offset": 0, 00:14:54.136 "data_size": 63488 00:14:54.136 }, 00:14:54.136 { 00:14:54.136 "name": "pt2", 00:14:54.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.136 "is_configured": true, 00:14:54.136 "data_offset": 2048, 00:14:54.136 "data_size": 63488 00:14:54.136 }, 00:14:54.136 { 00:14:54.136 "name": "pt3", 00:14:54.136 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.136 "is_configured": true, 00:14:54.136 "data_offset": 2048, 00:14:54.136 "data_size": 63488 00:14:54.136 } 
00:14:54.136 ] 00:14:54.136 }' 00:14:54.136 04:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.136 04:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.701 [2024-11-27 04:36:42.049615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.701 [2024-11-27 04:36:42.049651] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.701 [2024-11-27 04:36:42.049752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.701 [2024-11-27 04:36:42.049861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.701 [2024-11-27 04:36:42.049889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.701 04:36:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.701 [2024-11-27 04:36:42.137602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:54.701 [2024-11-27 04:36:42.137680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.701 [2024-11-27 04:36:42.137719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:54.701 [2024-11-27 04:36:42.137745] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.701 [2024-11-27 04:36:42.140911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.701 [2024-11-27 04:36:42.140967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:54.701 [2024-11-27 04:36:42.141107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:54.701 [2024-11-27 04:36:42.141211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.701 pt2 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.701 04:36:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.701 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.702 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.702 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.702 "name": "raid_bdev1", 00:14:54.702 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:54.702 "strip_size_kb": 0, 00:14:54.702 "state": "configuring", 00:14:54.702 "raid_level": "raid1", 00:14:54.702 "superblock": true, 00:14:54.702 "num_base_bdevs": 3, 00:14:54.702 "num_base_bdevs_discovered": 1, 00:14:54.702 "num_base_bdevs_operational": 2, 00:14:54.702 "base_bdevs_list": [ 00:14:54.702 { 00:14:54.702 "name": null, 00:14:54.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.702 "is_configured": false, 00:14:54.702 "data_offset": 2048, 00:14:54.702 "data_size": 63488 00:14:54.702 }, 00:14:54.702 { 00:14:54.702 "name": "pt2", 00:14:54.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.702 "is_configured": true, 00:14:54.702 "data_offset": 2048, 00:14:54.702 "data_size": 63488 00:14:54.702 }, 00:14:54.702 { 00:14:54.702 "name": null, 00:14:54.702 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.702 "is_configured": false, 00:14:54.702 "data_offset": 2048, 00:14:54.702 "data_size": 63488 00:14:54.702 } 
00:14:54.702 ] 00:14:54.702 }' 00:14:54.702 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.702 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.268 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:55.268 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:55.268 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:55.268 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:55.268 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.268 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.268 [2024-11-27 04:36:42.689803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:55.268 [2024-11-27 04:36:42.689896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.268 [2024-11-27 04:36:42.689940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:55.268 [2024-11-27 04:36:42.689958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.268 [2024-11-27 04:36:42.690545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.268 [2024-11-27 04:36:42.690583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:55.268 [2024-11-27 04:36:42.690696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:55.268 [2024-11-27 04:36:42.690739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:55.268 [2024-11-27 04:36:42.690906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:14:55.268 [2024-11-27 04:36:42.690928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:55.268 [2024-11-27 04:36:42.691266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:55.268 [2024-11-27 04:36:42.691467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:55.268 [2024-11-27 04:36:42.691483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:55.268 [2024-11-27 04:36:42.691654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.268 pt3 00:14:55.268 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.268 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.269 
04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.269 "name": "raid_bdev1", 00:14:55.269 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:55.269 "strip_size_kb": 0, 00:14:55.269 "state": "online", 00:14:55.269 "raid_level": "raid1", 00:14:55.269 "superblock": true, 00:14:55.269 "num_base_bdevs": 3, 00:14:55.269 "num_base_bdevs_discovered": 2, 00:14:55.269 "num_base_bdevs_operational": 2, 00:14:55.269 "base_bdevs_list": [ 00:14:55.269 { 00:14:55.269 "name": null, 00:14:55.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.269 "is_configured": false, 00:14:55.269 "data_offset": 2048, 00:14:55.269 "data_size": 63488 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "name": "pt2", 00:14:55.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.269 "is_configured": true, 00:14:55.269 "data_offset": 2048, 00:14:55.269 "data_size": 63488 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "name": "pt3", 00:14:55.269 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.269 "is_configured": true, 00:14:55.269 "data_offset": 2048, 00:14:55.269 "data_size": 63488 00:14:55.269 } 00:14:55.269 ] 00:14:55.269 }' 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.269 04:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.834 [2024-11-27 04:36:43.153892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.834 [2024-11-27 04:36:43.153932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.834 [2024-11-27 04:36:43.154033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.834 [2024-11-27 04:36:43.154123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.834 [2024-11-27 04:36:43.154140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.834 [2024-11-27 04:36:43.221907] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:55.834 [2024-11-27 04:36:43.222119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.834 [2024-11-27 04:36:43.222158] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:55.834 [2024-11-27 04:36:43.222174] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.834 [2024-11-27 04:36:43.225069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.834 [2024-11-27 04:36:43.225114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:55.834 [2024-11-27 04:36:43.225217] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:55.834 [2024-11-27 04:36:43.225280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:55.834 [2024-11-27 04:36:43.225456] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:55.834 [2024-11-27 04:36:43.225474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.834 [2024-11-27 04:36:43.225497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:14:55.834 [2024-11-27 04:36:43.225567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:55.834 pt1 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.834 "name": "raid_bdev1", 00:14:55.834 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:55.834 "strip_size_kb": 0, 00:14:55.834 "state": "configuring", 00:14:55.834 "raid_level": "raid1", 00:14:55.834 "superblock": true, 00:14:55.834 "num_base_bdevs": 3, 00:14:55.834 "num_base_bdevs_discovered": 1, 00:14:55.834 "num_base_bdevs_operational": 2, 00:14:55.834 "base_bdevs_list": [ 00:14:55.834 { 00:14:55.834 "name": null, 00:14:55.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.834 "is_configured": false, 00:14:55.834 "data_offset": 2048, 00:14:55.834 "data_size": 63488 00:14:55.834 }, 00:14:55.834 { 00:14:55.834 "name": "pt2", 00:14:55.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.834 "is_configured": true, 00:14:55.834 "data_offset": 2048, 00:14:55.834 "data_size": 63488 00:14:55.834 }, 00:14:55.834 { 00:14:55.834 "name": null, 00:14:55.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.834 "is_configured": false, 00:14:55.834 "data_offset": 2048, 00:14:55.834 "data_size": 63488 00:14:55.834 } 00:14:55.834 ] 00:14:55.834 }' 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.834 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.399 [2024-11-27 04:36:43.774343] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:56.399 [2024-11-27 04:36:43.774542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.399 [2024-11-27 04:36:43.774622] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:56.399 [2024-11-27 04:36:43.774653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.399 [2024-11-27 04:36:43.775717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.399 [2024-11-27 04:36:43.775802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:56.399 [2024-11-27 04:36:43.776028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:56.399 [2024-11-27 04:36:43.776089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:56.399 [2024-11-27 04:36:43.776366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:56.399 [2024-11-27 04:36:43.776394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:56.399 [2024-11-27 04:36:43.776892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:56.399 [2024-11-27 04:36:43.777238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:56.399 [2024-11-27 04:36:43.777279] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:56.399 [2024-11-27 04:36:43.777575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.399 pt3 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.399 "name": "raid_bdev1", 00:14:56.399 "uuid": "8d926b97-699a-45f0-bdfc-5131b047d94a", 00:14:56.399 "strip_size_kb": 0, 00:14:56.399 "state": "online", 00:14:56.399 "raid_level": "raid1", 00:14:56.399 "superblock": true, 00:14:56.399 "num_base_bdevs": 3, 00:14:56.399 "num_base_bdevs_discovered": 2, 00:14:56.399 "num_base_bdevs_operational": 2, 00:14:56.399 "base_bdevs_list": [ 00:14:56.399 { 00:14:56.399 "name": null, 00:14:56.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.399 "is_configured": false, 00:14:56.399 "data_offset": 2048, 00:14:56.399 "data_size": 63488 00:14:56.399 }, 00:14:56.399 { 00:14:56.399 "name": "pt2", 00:14:56.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.399 "is_configured": true, 00:14:56.399 "data_offset": 2048, 00:14:56.399 "data_size": 63488 00:14:56.399 }, 00:14:56.399 { 00:14:56.399 "name": "pt3", 00:14:56.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.399 "is_configured": true, 00:14:56.399 "data_offset": 2048, 00:14:56.399 "data_size": 63488 00:14:56.399 } 00:14:56.399 ] 00:14:56.399 }' 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.399 04:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.657 04:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:56.657 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.657 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.657 04:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:56.657 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.915 [2024-11-27 04:36:44.294723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8d926b97-699a-45f0-bdfc-5131b047d94a '!=' 8d926b97-699a-45f0-bdfc-5131b047d94a ']' 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68837 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68837 ']' 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68837 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68837 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:56.915 killing process with pid 68837 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68837' 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68837 00:14:56.915 [2024-11-27 04:36:44.364627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.915 04:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68837 00:14:56.915 [2024-11-27 04:36:44.364836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.915 [2024-11-27 04:36:44.364939] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.915 [2024-11-27 04:36:44.364963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:57.173 [2024-11-27 04:36:44.658615] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.574 04:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:58.574 00:14:58.574 real 0m8.548s 00:14:58.574 user 0m13.928s 00:14:58.574 sys 0m1.146s 00:14:58.574 04:36:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.574 ************************************ 00:14:58.574 END TEST raid_superblock_test 00:14:58.574 ************************************ 00:14:58.575 04:36:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.575 04:36:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:14:58.575 04:36:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:58.575 04:36:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.575 04:36:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.575 ************************************ 00:14:58.575 START TEST raid_read_error_test 00:14:58.575 ************************************ 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:14:58.575 04:36:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:58.575 04:36:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.isZSd3i6hE 00:14:58.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69294 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69294 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69294 ']' 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.575 04:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.575 [2024-11-27 04:36:45.954446] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:14:58.575 [2024-11-27 04:36:45.954677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69294 ] 00:14:58.575 [2024-11-27 04:36:46.146669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.833 [2024-11-27 04:36:46.299586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.091 [2024-11-27 04:36:46.500803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.091 [2024-11-27 04:36:46.500855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.348 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.348 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:59.348 04:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:59.348 04:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:59.348 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.348 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.348 BaseBdev1_malloc 00:14:59.348 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.348 04:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:59.348 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.348 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.348 true 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.349 [2024-11-27 04:36:46.902182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:59.349 [2024-11-27 04:36:46.902250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.349 [2024-11-27 04:36:46.902281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:59.349 [2024-11-27 04:36:46.902299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.349 [2024-11-27 04:36:46.905043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.349 [2024-11-27 04:36:46.905226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:59.349 BaseBdev1 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.349 BaseBdev2_malloc 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.349 true 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.349 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.349 [2024-11-27 04:36:46.966088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:59.349 [2024-11-27 04:36:46.966159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.349 [2024-11-27 04:36:46.966187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:59.349 [2024-11-27 04:36:46.966205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.349 [2024-11-27 04:36:46.968965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.349 [2024-11-27 04:36:46.969014] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:59.608 BaseBdev2 00:14:59.608 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.608 04:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:59.608 04:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:59.608 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.608 04:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.608 BaseBdev3_malloc 00:14:59.608 04:36:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.608 true 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.608 [2024-11-27 04:36:47.033399] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:59.608 [2024-11-27 04:36:47.033588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.608 [2024-11-27 04:36:47.033625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:59.608 [2024-11-27 04:36:47.033644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.608 [2024-11-27 04:36:47.036441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.608 [2024-11-27 04:36:47.036601] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:59.608 BaseBdev3 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.608 [2024-11-27 04:36:47.041549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.608 [2024-11-27 04:36:47.044105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.608 [2024-11-27 04:36:47.044328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.608 [2024-11-27 04:36:47.044622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:59.608 [2024-11-27 04:36:47.044642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:59.608 [2024-11-27 04:36:47.044984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:59.608 [2024-11-27 04:36:47.045225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:59.608 [2024-11-27 04:36:47.045244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:59.608 [2024-11-27 04:36:47.045485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.608 04:36:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.608 "name": "raid_bdev1", 00:14:59.608 "uuid": "1626c869-48ee-44b4-a3ea-353c238195fa", 00:14:59.608 "strip_size_kb": 0, 00:14:59.608 "state": "online", 00:14:59.608 "raid_level": "raid1", 00:14:59.608 "superblock": true, 00:14:59.608 "num_base_bdevs": 3, 00:14:59.608 "num_base_bdevs_discovered": 3, 00:14:59.608 "num_base_bdevs_operational": 3, 00:14:59.608 "base_bdevs_list": [ 00:14:59.608 { 00:14:59.608 "name": "BaseBdev1", 00:14:59.608 "uuid": "15ce51c8-a343-56f8-9bde-ff2e29ccbec6", 00:14:59.608 "is_configured": true, 00:14:59.608 "data_offset": 2048, 00:14:59.608 "data_size": 63488 00:14:59.608 }, 00:14:59.608 { 00:14:59.608 "name": "BaseBdev2", 00:14:59.608 "uuid": "d91636a5-42c8-51bc-b070-eebfc76d302c", 00:14:59.608 "is_configured": true, 00:14:59.608 "data_offset": 2048, 00:14:59.608 "data_size": 63488 
00:14:59.608 }, 00:14:59.608 { 00:14:59.608 "name": "BaseBdev3", 00:14:59.608 "uuid": "65040ca1-37cb-5e1c-987f-d8939606d8cf", 00:14:59.608 "is_configured": true, 00:14:59.608 "data_offset": 2048, 00:14:59.608 "data_size": 63488 00:14:59.608 } 00:14:59.608 ] 00:14:59.608 }' 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.608 04:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.173 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:00.173 04:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:00.173 [2024-11-27 04:36:47.615129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.104 
04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.104 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.105 04:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.105 04:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.105 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.105 "name": "raid_bdev1", 00:15:01.105 "uuid": "1626c869-48ee-44b4-a3ea-353c238195fa", 00:15:01.105 "strip_size_kb": 0, 00:15:01.105 "state": "online", 00:15:01.105 "raid_level": "raid1", 00:15:01.105 "superblock": true, 00:15:01.105 "num_base_bdevs": 3, 00:15:01.105 "num_base_bdevs_discovered": 3, 00:15:01.105 "num_base_bdevs_operational": 3, 00:15:01.105 "base_bdevs_list": [ 00:15:01.105 { 00:15:01.105 "name": "BaseBdev1", 00:15:01.105 "uuid": "15ce51c8-a343-56f8-9bde-ff2e29ccbec6", 
00:15:01.105 "is_configured": true, 00:15:01.105 "data_offset": 2048, 00:15:01.105 "data_size": 63488 00:15:01.105 }, 00:15:01.105 { 00:15:01.105 "name": "BaseBdev2", 00:15:01.105 "uuid": "d91636a5-42c8-51bc-b070-eebfc76d302c", 00:15:01.105 "is_configured": true, 00:15:01.105 "data_offset": 2048, 00:15:01.105 "data_size": 63488 00:15:01.105 }, 00:15:01.105 { 00:15:01.105 "name": "BaseBdev3", 00:15:01.105 "uuid": "65040ca1-37cb-5e1c-987f-d8939606d8cf", 00:15:01.105 "is_configured": true, 00:15:01.105 "data_offset": 2048, 00:15:01.105 "data_size": 63488 00:15:01.105 } 00:15:01.105 ] 00:15:01.105 }' 00:15:01.105 04:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.105 04:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.671 [2024-11-27 04:36:49.019024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.671 [2024-11-27 04:36:49.019060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.671 [2024-11-27 04:36:49.022540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.671 { 00:15:01.671 "results": [ 00:15:01.671 { 00:15:01.671 "job": "raid_bdev1", 00:15:01.671 "core_mask": "0x1", 00:15:01.671 "workload": "randrw", 00:15:01.671 "percentage": 50, 00:15:01.671 "status": "finished", 00:15:01.671 "queue_depth": 1, 00:15:01.671 "io_size": 131072, 00:15:01.671 "runtime": 1.401379, 00:15:01.671 "iops": 9494.933205078712, 00:15:01.671 "mibps": 1186.866650634839, 00:15:01.671 "io_failed": 0, 00:15:01.671 "io_timeout": 0, 00:15:01.671 "avg_latency_us": 100.88453547955126, 
00:15:01.671 "min_latency_us": 43.985454545454544, 00:15:01.671 "max_latency_us": 1824.581818181818 00:15:01.671 } 00:15:01.671 ], 00:15:01.671 "core_count": 1 00:15:01.671 } 00:15:01.671 [2024-11-27 04:36:49.022749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.671 [2024-11-27 04:36:49.022980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.671 [2024-11-27 04:36:49.023001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69294 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69294 ']' 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69294 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69294 00:15:01.671 killing process with pid 69294 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69294' 00:15:01.671 04:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69294 00:15:01.671 [2024-11-27 04:36:49.054103] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.671 04:36:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69294 00:15:01.671 [2024-11-27 04:36:49.258514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.045 04:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:03.045 04:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.isZSd3i6hE 00:15:03.045 04:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:03.045 04:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:03.045 04:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:03.045 ************************************ 00:15:03.045 END TEST raid_read_error_test 00:15:03.045 ************************************ 00:15:03.045 04:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:03.045 04:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:03.045 04:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:03.045 00:15:03.045 real 0m4.549s 00:15:03.045 user 0m5.549s 00:15:03.045 sys 0m0.546s 00:15:03.045 04:36:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.045 04:36:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.045 04:36:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:15:03.045 04:36:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:03.045 04:36:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.045 04:36:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.045 ************************************ 00:15:03.045 START TEST raid_write_error_test 00:15:03.045 ************************************ 00:15:03.045 04:36:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EUplGBXcZg 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69434 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69434 00:15:03.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69434 ']' 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.045 04:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.045 [2024-11-27 04:36:50.518971] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:15:03.045 [2024-11-27 04:36:50.519135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69434 ] 00:15:03.303 [2024-11-27 04:36:50.692103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.303 [2024-11-27 04:36:50.824060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.577 [2024-11-27 04:36:51.028833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.577 [2024-11-27 04:36:51.029087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.145 BaseBdev1_malloc 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.145 true 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.145 [2024-11-27 04:36:51.661146] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:04.145 [2024-11-27 04:36:51.661388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.145 [2024-11-27 04:36:51.661566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:04.145 [2024-11-27 04:36:51.661738] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.145 [2024-11-27 04:36:51.664916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.145 [2024-11-27 04:36:51.664968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.145 BaseBdev1 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.145 BaseBdev2_malloc 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.145 true 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.145 [2024-11-27 04:36:51.722083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:04.145 [2024-11-27 04:36:51.722306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.145 [2024-11-27 04:36:51.722456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:04.145 [2024-11-27 04:36:51.722503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.145 [2024-11-27 04:36:51.725668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.145 [2024-11-27 04:36:51.725720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:04.145 BaseBdev2 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:04.145 04:36:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.145 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.405 BaseBdev3_malloc 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.405 true 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.405 [2024-11-27 04:36:51.790392] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:04.405 [2024-11-27 04:36:51.790625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.405 [2024-11-27 04:36:51.790809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:04.405 [2024-11-27 04:36:51.790951] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.405 [2024-11-27 04:36:51.794154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.405 [2024-11-27 04:36:51.794211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:04.405 BaseBdev3 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.405 [2024-11-27 04:36:51.798597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.405 [2024-11-27 04:36:51.801374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.405 [2024-11-27 04:36:51.801503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.405 [2024-11-27 04:36:51.801872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:04.405 [2024-11-27 04:36:51.801897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:04.405 [2024-11-27 04:36:51.802263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:04.405 [2024-11-27 04:36:51.802544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:04.405 [2024-11-27 04:36:51.802572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:04.405 [2024-11-27 04:36:51.802861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.405 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.406 "name": "raid_bdev1", 00:15:04.406 "uuid": "1b5a6f20-e0fd-4fff-82bb-90d9ce8f2321", 00:15:04.406 "strip_size_kb": 0, 00:15:04.406 "state": "online", 00:15:04.406 "raid_level": "raid1", 00:15:04.406 "superblock": true, 00:15:04.406 "num_base_bdevs": 3, 00:15:04.406 "num_base_bdevs_discovered": 3, 00:15:04.406 "num_base_bdevs_operational": 3, 00:15:04.406 "base_bdevs_list": [ 00:15:04.406 { 00:15:04.406 "name": "BaseBdev1", 00:15:04.406 
"uuid": "ceb9d0a7-d466-5673-8fa6-3d2b7d76df9b", 00:15:04.406 "is_configured": true, 00:15:04.406 "data_offset": 2048, 00:15:04.406 "data_size": 63488 00:15:04.406 }, 00:15:04.406 { 00:15:04.406 "name": "BaseBdev2", 00:15:04.406 "uuid": "0add3057-0f23-533f-8deb-7f8f32d76076", 00:15:04.406 "is_configured": true, 00:15:04.406 "data_offset": 2048, 00:15:04.406 "data_size": 63488 00:15:04.406 }, 00:15:04.406 { 00:15:04.406 "name": "BaseBdev3", 00:15:04.406 "uuid": "fd0c0987-e9be-5aaa-84ba-9eee726f4e6a", 00:15:04.406 "is_configured": true, 00:15:04.406 "data_offset": 2048, 00:15:04.406 "data_size": 63488 00:15:04.406 } 00:15:04.406 ] 00:15:04.406 }' 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.406 04:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.970 04:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:04.970 04:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:04.970 [2024-11-27 04:36:52.428378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.901 [2024-11-27 04:36:53.308555] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:05.901 [2024-11-27 04:36:53.308616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.901 [2024-11-27 04:36:53.308900] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.901 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.901 "name": "raid_bdev1", 00:15:05.901 "uuid": "1b5a6f20-e0fd-4fff-82bb-90d9ce8f2321", 00:15:05.901 "strip_size_kb": 0, 00:15:05.901 "state": "online", 00:15:05.901 "raid_level": "raid1", 00:15:05.901 "superblock": true, 00:15:05.901 "num_base_bdevs": 3, 00:15:05.901 "num_base_bdevs_discovered": 2, 00:15:05.901 "num_base_bdevs_operational": 2, 00:15:05.901 "base_bdevs_list": [ 00:15:05.901 { 00:15:05.901 "name": null, 00:15:05.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.901 "is_configured": false, 00:15:05.901 "data_offset": 0, 00:15:05.901 "data_size": 63488 00:15:05.901 }, 00:15:05.901 { 00:15:05.902 "name": "BaseBdev2", 00:15:05.902 "uuid": "0add3057-0f23-533f-8deb-7f8f32d76076", 00:15:05.902 "is_configured": true, 00:15:05.902 "data_offset": 2048, 00:15:05.902 "data_size": 63488 00:15:05.902 }, 00:15:05.902 { 00:15:05.902 "name": "BaseBdev3", 00:15:05.902 "uuid": "fd0c0987-e9be-5aaa-84ba-9eee726f4e6a", 00:15:05.902 "is_configured": true, 00:15:05.902 "data_offset": 2048, 00:15:05.902 "data_size": 63488 00:15:05.902 } 00:15:05.902 ] 00:15:05.902 }' 00:15:05.902 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.902 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.466 [2024-11-27 04:36:53.841362] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.466 [2024-11-27 04:36:53.841535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.466 [2024-11-27 04:36:53.845018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.466 [2024-11-27 04:36:53.845091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.466 [2024-11-27 04:36:53.845201] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.466 [2024-11-27 04:36:53.845226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.466 { 00:15:06.466 "results": [ 00:15:06.466 { 00:15:06.466 "job": "raid_bdev1", 00:15:06.466 "core_mask": "0x1", 00:15:06.466 "workload": "randrw", 00:15:06.466 "percentage": 50, 00:15:06.466 "status": "finished", 00:15:06.466 "queue_depth": 1, 00:15:06.466 "io_size": 131072, 00:15:06.466 "runtime": 1.410935, 00:15:06.466 "iops": 10714.8805579279, 00:15:06.466 "mibps": 1339.3600697409875, 00:15:06.466 "io_failed": 0, 00:15:06.466 "io_timeout": 0, 00:15:06.466 "avg_latency_us": 89.05843726322625, 00:15:06.466 "min_latency_us": 42.123636363636365, 00:15:06.466 "max_latency_us": 1809.6872727272728 00:15:06.466 } 00:15:06.466 ], 00:15:06.466 "core_count": 1 00:15:06.466 } 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69434 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69434 ']' 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69434 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:06.466 04:36:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69434 00:15:06.466 killing process with pid 69434 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69434' 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69434 00:15:06.466 [2024-11-27 04:36:53.874259] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.466 04:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69434 00:15:06.466 [2024-11-27 04:36:54.074333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.838 04:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:07.838 04:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EUplGBXcZg 00:15:07.838 04:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:07.838 04:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:07.838 04:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:07.838 ************************************ 00:15:07.838 END TEST raid_write_error_test 00:15:07.838 ************************************ 00:15:07.838 04:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:07.838 04:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:07.838 04:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:15:07.838 00:15:07.838 real 0m4.776s 00:15:07.838 user 0m6.020s 00:15:07.838 sys 0m0.535s 00:15:07.838 04:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.838 04:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.838 04:36:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:15:07.838 04:36:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:07.838 04:36:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:15:07.838 04:36:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:07.838 04:36:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.838 04:36:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.838 ************************************ 00:15:07.838 START TEST raid_state_function_test 00:15:07.838 ************************************ 00:15:07.838 04:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:15:07.838 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:07.838 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:07.838 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:07.838 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:07.838 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:07.838 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.838 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:07.839 
04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.839 Process raid pid: 69578 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 
00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69578 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69578' 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69578 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69578 ']' 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.839 04:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.839 [2024-11-27 04:36:55.356822] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:15:07.839 [2024-11-27 04:36:55.357235] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.096 [2024-11-27 04:36:55.535839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.096 [2024-11-27 04:36:55.666733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.354 [2024-11-27 04:36:55.901110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.354 [2024-11-27 04:36:55.901361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.919 [2024-11-27 04:36:56.308441] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:08.919 [2024-11-27 04:36:56.308634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:08.919 [2024-11-27 04:36:56.308664] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.919 [2024-11-27 04:36:56.308682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.919 [2024-11-27 04:36:56.308694] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:08.919 [2024-11-27 04:36:56.308709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:08.919 [2024-11-27 04:36:56.308719] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:08.919 [2024-11-27 04:36:56.308733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.919 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.919 "name": "Existed_Raid", 00:15:08.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.919 "strip_size_kb": 64, 00:15:08.919 "state": "configuring", 00:15:08.919 "raid_level": "raid0", 00:15:08.919 "superblock": false, 00:15:08.919 "num_base_bdevs": 4, 00:15:08.919 "num_base_bdevs_discovered": 0, 00:15:08.919 "num_base_bdevs_operational": 4, 00:15:08.919 "base_bdevs_list": [ 00:15:08.919 { 00:15:08.919 "name": "BaseBdev1", 00:15:08.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.919 "is_configured": false, 00:15:08.919 "data_offset": 0, 00:15:08.919 "data_size": 0 00:15:08.919 }, 00:15:08.919 { 00:15:08.920 "name": "BaseBdev2", 00:15:08.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.920 "is_configured": false, 00:15:08.920 "data_offset": 0, 00:15:08.920 "data_size": 0 00:15:08.920 }, 00:15:08.920 { 00:15:08.920 "name": "BaseBdev3", 00:15:08.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.920 "is_configured": false, 00:15:08.920 "data_offset": 0, 00:15:08.920 "data_size": 0 00:15:08.920 }, 00:15:08.920 { 00:15:08.920 "name": "BaseBdev4", 00:15:08.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.920 "is_configured": false, 00:15:08.920 "data_offset": 0, 00:15:08.920 "data_size": 0 00:15:08.920 } 00:15:08.920 ] 00:15:08.920 }' 00:15:08.920 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.920 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.488 [2024-11-27 04:36:56.816540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.488 [2024-11-27 04:36:56.816591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.488 [2024-11-27 04:36:56.824529] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.488 [2024-11-27 04:36:56.824709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.488 [2024-11-27 04:36:56.824852] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.488 [2024-11-27 04:36:56.824924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.488 [2024-11-27 04:36:56.825027] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:09.488 [2024-11-27 04:36:56.825086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:09.488 [2024-11-27 04:36:56.825256] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:09.488 [2024-11-27 04:36:56.825289] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.488 [2024-11-27 04:36:56.870628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.488 BaseBdev1 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.488 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.488 [ 00:15:09.488 { 00:15:09.488 "name": "BaseBdev1", 00:15:09.488 "aliases": [ 00:15:09.488 "f395dbd1-2703-4929-bb1c-aad0195e7f7a" 00:15:09.488 ], 00:15:09.488 "product_name": "Malloc disk", 00:15:09.488 "block_size": 512, 00:15:09.488 "num_blocks": 65536, 00:15:09.488 "uuid": "f395dbd1-2703-4929-bb1c-aad0195e7f7a", 00:15:09.488 "assigned_rate_limits": { 00:15:09.488 "rw_ios_per_sec": 0, 00:15:09.488 "rw_mbytes_per_sec": 0, 00:15:09.488 "r_mbytes_per_sec": 0, 00:15:09.488 "w_mbytes_per_sec": 0 00:15:09.488 }, 00:15:09.488 "claimed": true, 00:15:09.488 "claim_type": "exclusive_write", 00:15:09.488 "zoned": false, 00:15:09.488 "supported_io_types": { 00:15:09.488 "read": true, 00:15:09.488 "write": true, 00:15:09.488 "unmap": true, 00:15:09.488 "flush": true, 00:15:09.488 "reset": true, 00:15:09.488 "nvme_admin": false, 00:15:09.488 "nvme_io": false, 00:15:09.488 "nvme_io_md": false, 00:15:09.488 "write_zeroes": true, 00:15:09.488 "zcopy": true, 00:15:09.488 "get_zone_info": false, 00:15:09.488 "zone_management": false, 00:15:09.488 "zone_append": false, 00:15:09.488 "compare": false, 00:15:09.489 "compare_and_write": false, 00:15:09.489 "abort": true, 00:15:09.489 "seek_hole": false, 00:15:09.489 "seek_data": false, 00:15:09.489 "copy": true, 00:15:09.489 "nvme_iov_md": false 00:15:09.489 }, 00:15:09.489 "memory_domains": [ 00:15:09.489 { 00:15:09.489 "dma_device_id": "system", 00:15:09.489 "dma_device_type": 1 00:15:09.489 }, 00:15:09.489 { 00:15:09.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.489 "dma_device_type": 2 00:15:09.489 } 00:15:09.489 ], 00:15:09.489 "driver_specific": {} 00:15:09.489 } 00:15:09.489 ] 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.489 "name": "Existed_Raid", 
00:15:09.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.489 "strip_size_kb": 64, 00:15:09.489 "state": "configuring", 00:15:09.489 "raid_level": "raid0", 00:15:09.489 "superblock": false, 00:15:09.489 "num_base_bdevs": 4, 00:15:09.489 "num_base_bdevs_discovered": 1, 00:15:09.489 "num_base_bdevs_operational": 4, 00:15:09.489 "base_bdevs_list": [ 00:15:09.489 { 00:15:09.489 "name": "BaseBdev1", 00:15:09.489 "uuid": "f395dbd1-2703-4929-bb1c-aad0195e7f7a", 00:15:09.489 "is_configured": true, 00:15:09.489 "data_offset": 0, 00:15:09.489 "data_size": 65536 00:15:09.489 }, 00:15:09.489 { 00:15:09.489 "name": "BaseBdev2", 00:15:09.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.489 "is_configured": false, 00:15:09.489 "data_offset": 0, 00:15:09.489 "data_size": 0 00:15:09.489 }, 00:15:09.489 { 00:15:09.489 "name": "BaseBdev3", 00:15:09.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.489 "is_configured": false, 00:15:09.489 "data_offset": 0, 00:15:09.489 "data_size": 0 00:15:09.489 }, 00:15:09.489 { 00:15:09.489 "name": "BaseBdev4", 00:15:09.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.489 "is_configured": false, 00:15:09.489 "data_offset": 0, 00:15:09.489 "data_size": 0 00:15:09.489 } 00:15:09.489 ] 00:15:09.489 }' 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.489 04:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.746 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:09.746 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.746 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.004 [2024-11-27 04:36:57.370833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.004 [2024-11-27 04:36:57.370898] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.004 [2024-11-27 04:36:57.378861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.004 [2024-11-27 04:36:57.381389] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.004 [2024-11-27 04:36:57.381566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.004 [2024-11-27 04:36:57.381684] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:10.004 [2024-11-27 04:36:57.381746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:10.004 [2024-11-27 04:36:57.381973] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:10.004 [2024-11-27 04:36:57.382007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.004 "name": "Existed_Raid", 00:15:10.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.004 "strip_size_kb": 64, 00:15:10.004 "state": "configuring", 00:15:10.004 "raid_level": "raid0", 00:15:10.004 "superblock": false, 00:15:10.004 "num_base_bdevs": 4, 00:15:10.004 
"num_base_bdevs_discovered": 1, 00:15:10.004 "num_base_bdevs_operational": 4, 00:15:10.004 "base_bdevs_list": [ 00:15:10.004 { 00:15:10.004 "name": "BaseBdev1", 00:15:10.004 "uuid": "f395dbd1-2703-4929-bb1c-aad0195e7f7a", 00:15:10.004 "is_configured": true, 00:15:10.004 "data_offset": 0, 00:15:10.004 "data_size": 65536 00:15:10.004 }, 00:15:10.004 { 00:15:10.004 "name": "BaseBdev2", 00:15:10.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.004 "is_configured": false, 00:15:10.004 "data_offset": 0, 00:15:10.004 "data_size": 0 00:15:10.004 }, 00:15:10.004 { 00:15:10.004 "name": "BaseBdev3", 00:15:10.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.004 "is_configured": false, 00:15:10.004 "data_offset": 0, 00:15:10.004 "data_size": 0 00:15:10.004 }, 00:15:10.004 { 00:15:10.004 "name": "BaseBdev4", 00:15:10.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.004 "is_configured": false, 00:15:10.004 "data_offset": 0, 00:15:10.004 "data_size": 0 00:15:10.004 } 00:15:10.004 ] 00:15:10.004 }' 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.004 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.571 [2024-11-27 04:36:57.925214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.571 BaseBdev2 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:10.571 04:36:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.571 [ 00:15:10.571 { 00:15:10.571 "name": "BaseBdev2", 00:15:10.571 "aliases": [ 00:15:10.571 "27d6f251-abd4-41db-8891-b8ef4b8ecb92" 00:15:10.571 ], 00:15:10.571 "product_name": "Malloc disk", 00:15:10.571 "block_size": 512, 00:15:10.571 "num_blocks": 65536, 00:15:10.571 "uuid": "27d6f251-abd4-41db-8891-b8ef4b8ecb92", 00:15:10.571 "assigned_rate_limits": { 00:15:10.571 "rw_ios_per_sec": 0, 00:15:10.571 "rw_mbytes_per_sec": 0, 00:15:10.571 "r_mbytes_per_sec": 0, 00:15:10.571 "w_mbytes_per_sec": 0 00:15:10.571 }, 00:15:10.571 "claimed": true, 00:15:10.571 "claim_type": "exclusive_write", 00:15:10.571 "zoned": false, 00:15:10.571 "supported_io_types": { 
00:15:10.571 "read": true, 00:15:10.571 "write": true, 00:15:10.571 "unmap": true, 00:15:10.571 "flush": true, 00:15:10.571 "reset": true, 00:15:10.571 "nvme_admin": false, 00:15:10.571 "nvme_io": false, 00:15:10.571 "nvme_io_md": false, 00:15:10.571 "write_zeroes": true, 00:15:10.571 "zcopy": true, 00:15:10.571 "get_zone_info": false, 00:15:10.571 "zone_management": false, 00:15:10.571 "zone_append": false, 00:15:10.571 "compare": false, 00:15:10.571 "compare_and_write": false, 00:15:10.571 "abort": true, 00:15:10.571 "seek_hole": false, 00:15:10.571 "seek_data": false, 00:15:10.571 "copy": true, 00:15:10.571 "nvme_iov_md": false 00:15:10.571 }, 00:15:10.571 "memory_domains": [ 00:15:10.571 { 00:15:10.571 "dma_device_id": "system", 00:15:10.571 "dma_device_type": 1 00:15:10.571 }, 00:15:10.571 { 00:15:10.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.571 "dma_device_type": 2 00:15:10.571 } 00:15:10.571 ], 00:15:10.571 "driver_specific": {} 00:15:10.571 } 00:15:10.571 ] 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.571 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.572 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.572 04:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.572 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.572 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.572 04:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.572 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.572 "name": "Existed_Raid", 00:15:10.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.572 "strip_size_kb": 64, 00:15:10.572 "state": "configuring", 00:15:10.572 "raid_level": "raid0", 00:15:10.572 "superblock": false, 00:15:10.572 "num_base_bdevs": 4, 00:15:10.572 "num_base_bdevs_discovered": 2, 00:15:10.572 "num_base_bdevs_operational": 4, 00:15:10.572 "base_bdevs_list": [ 00:15:10.572 { 00:15:10.572 "name": "BaseBdev1", 00:15:10.572 "uuid": "f395dbd1-2703-4929-bb1c-aad0195e7f7a", 00:15:10.572 "is_configured": true, 00:15:10.572 "data_offset": 0, 00:15:10.572 "data_size": 65536 00:15:10.572 }, 00:15:10.572 { 00:15:10.572 "name": "BaseBdev2", 00:15:10.572 "uuid": "27d6f251-abd4-41db-8891-b8ef4b8ecb92", 00:15:10.572 
"is_configured": true, 00:15:10.572 "data_offset": 0, 00:15:10.572 "data_size": 65536 00:15:10.572 }, 00:15:10.572 { 00:15:10.572 "name": "BaseBdev3", 00:15:10.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.572 "is_configured": false, 00:15:10.572 "data_offset": 0, 00:15:10.572 "data_size": 0 00:15:10.572 }, 00:15:10.572 { 00:15:10.572 "name": "BaseBdev4", 00:15:10.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.572 "is_configured": false, 00:15:10.572 "data_offset": 0, 00:15:10.572 "data_size": 0 00:15:10.572 } 00:15:10.572 ] 00:15:10.572 }' 00:15:10.572 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.572 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.137 BaseBdev3 00:15:11.137 [2024-11-27 04:36:58.516045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.137 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.138 [ 00:15:11.138 { 00:15:11.138 "name": "BaseBdev3", 00:15:11.138 "aliases": [ 00:15:11.138 "63928957-f92a-407f-ba64-1bec193d7386" 00:15:11.138 ], 00:15:11.138 "product_name": "Malloc disk", 00:15:11.138 "block_size": 512, 00:15:11.138 "num_blocks": 65536, 00:15:11.138 "uuid": "63928957-f92a-407f-ba64-1bec193d7386", 00:15:11.138 "assigned_rate_limits": { 00:15:11.138 "rw_ios_per_sec": 0, 00:15:11.138 "rw_mbytes_per_sec": 0, 00:15:11.138 "r_mbytes_per_sec": 0, 00:15:11.138 "w_mbytes_per_sec": 0 00:15:11.138 }, 00:15:11.138 "claimed": true, 00:15:11.138 "claim_type": "exclusive_write", 00:15:11.138 "zoned": false, 00:15:11.138 "supported_io_types": { 00:15:11.138 "read": true, 00:15:11.138 "write": true, 00:15:11.138 "unmap": true, 00:15:11.138 "flush": true, 00:15:11.138 "reset": true, 00:15:11.138 "nvme_admin": false, 00:15:11.138 "nvme_io": false, 00:15:11.138 "nvme_io_md": false, 00:15:11.138 "write_zeroes": true, 00:15:11.138 "zcopy": true, 00:15:11.138 "get_zone_info": false, 00:15:11.138 "zone_management": false, 00:15:11.138 "zone_append": false, 00:15:11.138 "compare": false, 00:15:11.138 "compare_and_write": false, 
00:15:11.138 "abort": true, 00:15:11.138 "seek_hole": false, 00:15:11.138 "seek_data": false, 00:15:11.138 "copy": true, 00:15:11.138 "nvme_iov_md": false 00:15:11.138 }, 00:15:11.138 "memory_domains": [ 00:15:11.138 { 00:15:11.138 "dma_device_id": "system", 00:15:11.138 "dma_device_type": 1 00:15:11.138 }, 00:15:11.138 { 00:15:11.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.138 "dma_device_type": 2 00:15:11.138 } 00:15:11.138 ], 00:15:11.138 "driver_specific": {} 00:15:11.138 } 00:15:11.138 ] 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.138 "name": "Existed_Raid", 00:15:11.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.138 "strip_size_kb": 64, 00:15:11.138 "state": "configuring", 00:15:11.138 "raid_level": "raid0", 00:15:11.138 "superblock": false, 00:15:11.138 "num_base_bdevs": 4, 00:15:11.138 "num_base_bdevs_discovered": 3, 00:15:11.138 "num_base_bdevs_operational": 4, 00:15:11.138 "base_bdevs_list": [ 00:15:11.138 { 00:15:11.138 "name": "BaseBdev1", 00:15:11.138 "uuid": "f395dbd1-2703-4929-bb1c-aad0195e7f7a", 00:15:11.138 "is_configured": true, 00:15:11.138 "data_offset": 0, 00:15:11.138 "data_size": 65536 00:15:11.138 }, 00:15:11.138 { 00:15:11.138 "name": "BaseBdev2", 00:15:11.138 "uuid": "27d6f251-abd4-41db-8891-b8ef4b8ecb92", 00:15:11.138 "is_configured": true, 00:15:11.138 "data_offset": 0, 00:15:11.138 "data_size": 65536 00:15:11.138 }, 00:15:11.138 { 00:15:11.138 "name": "BaseBdev3", 00:15:11.138 "uuid": "63928957-f92a-407f-ba64-1bec193d7386", 00:15:11.138 "is_configured": true, 00:15:11.138 "data_offset": 0, 00:15:11.138 "data_size": 65536 00:15:11.138 }, 00:15:11.138 { 00:15:11.138 "name": "BaseBdev4", 00:15:11.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.138 "is_configured": false, 
00:15:11.138 "data_offset": 0, 00:15:11.138 "data_size": 0 00:15:11.138 } 00:15:11.138 ] 00:15:11.138 }' 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.138 04:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.703 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:11.703 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.703 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.703 [2024-11-27 04:36:59.122857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:11.703 [2024-11-27 04:36:59.123112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:11.703 [2024-11-27 04:36:59.123139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:11.703 [2024-11-27 04:36:59.123496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:11.703 [2024-11-27 04:36:59.123712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:11.703 [2024-11-27 04:36:59.123736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:11.703 [2024-11-27 04:36:59.124073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.704 BaseBdev4 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.704 [ 00:15:11.704 { 00:15:11.704 "name": "BaseBdev4", 00:15:11.704 "aliases": [ 00:15:11.704 "6ed8011f-8544-4dd7-b2fa-7768a9a65dde" 00:15:11.704 ], 00:15:11.704 "product_name": "Malloc disk", 00:15:11.704 "block_size": 512, 00:15:11.704 "num_blocks": 65536, 00:15:11.704 "uuid": "6ed8011f-8544-4dd7-b2fa-7768a9a65dde", 00:15:11.704 "assigned_rate_limits": { 00:15:11.704 "rw_ios_per_sec": 0, 00:15:11.704 "rw_mbytes_per_sec": 0, 00:15:11.704 "r_mbytes_per_sec": 0, 00:15:11.704 "w_mbytes_per_sec": 0 00:15:11.704 }, 00:15:11.704 "claimed": true, 00:15:11.704 "claim_type": "exclusive_write", 00:15:11.704 "zoned": false, 00:15:11.704 "supported_io_types": { 00:15:11.704 "read": true, 00:15:11.704 "write": true, 00:15:11.704 "unmap": true, 00:15:11.704 "flush": true, 00:15:11.704 "reset": true, 00:15:11.704 
"nvme_admin": false, 00:15:11.704 "nvme_io": false, 00:15:11.704 "nvme_io_md": false, 00:15:11.704 "write_zeroes": true, 00:15:11.704 "zcopy": true, 00:15:11.704 "get_zone_info": false, 00:15:11.704 "zone_management": false, 00:15:11.704 "zone_append": false, 00:15:11.704 "compare": false, 00:15:11.704 "compare_and_write": false, 00:15:11.704 "abort": true, 00:15:11.704 "seek_hole": false, 00:15:11.704 "seek_data": false, 00:15:11.704 "copy": true, 00:15:11.704 "nvme_iov_md": false 00:15:11.704 }, 00:15:11.704 "memory_domains": [ 00:15:11.704 { 00:15:11.704 "dma_device_id": "system", 00:15:11.704 "dma_device_type": 1 00:15:11.704 }, 00:15:11.704 { 00:15:11.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.704 "dma_device_type": 2 00:15:11.704 } 00:15:11.704 ], 00:15:11.704 "driver_specific": {} 00:15:11.704 } 00:15:11.704 ] 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.704 04:36:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.704 "name": "Existed_Raid", 00:15:11.704 "uuid": "dc63f84b-65d7-4311-b90e-eb0c2b8ef7ce", 00:15:11.704 "strip_size_kb": 64, 00:15:11.704 "state": "online", 00:15:11.704 "raid_level": "raid0", 00:15:11.704 "superblock": false, 00:15:11.704 "num_base_bdevs": 4, 00:15:11.704 "num_base_bdevs_discovered": 4, 00:15:11.704 "num_base_bdevs_operational": 4, 00:15:11.704 "base_bdevs_list": [ 00:15:11.704 { 00:15:11.704 "name": "BaseBdev1", 00:15:11.704 "uuid": "f395dbd1-2703-4929-bb1c-aad0195e7f7a", 00:15:11.704 "is_configured": true, 00:15:11.704 "data_offset": 0, 00:15:11.704 "data_size": 65536 00:15:11.704 }, 00:15:11.704 { 00:15:11.704 "name": "BaseBdev2", 00:15:11.704 "uuid": "27d6f251-abd4-41db-8891-b8ef4b8ecb92", 00:15:11.704 "is_configured": true, 00:15:11.704 "data_offset": 0, 00:15:11.704 "data_size": 65536 00:15:11.704 }, 00:15:11.704 { 00:15:11.704 "name": "BaseBdev3", 00:15:11.704 "uuid": 
"63928957-f92a-407f-ba64-1bec193d7386", 00:15:11.704 "is_configured": true, 00:15:11.704 "data_offset": 0, 00:15:11.704 "data_size": 65536 00:15:11.704 }, 00:15:11.704 { 00:15:11.704 "name": "BaseBdev4", 00:15:11.704 "uuid": "6ed8011f-8544-4dd7-b2fa-7768a9a65dde", 00:15:11.704 "is_configured": true, 00:15:11.704 "data_offset": 0, 00:15:11.704 "data_size": 65536 00:15:11.704 } 00:15:11.704 ] 00:15:11.704 }' 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.704 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.268 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:12.269 [2024-11-27 04:36:59.675498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.269 04:36:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:12.269 "name": "Existed_Raid", 00:15:12.269 "aliases": [ 00:15:12.269 "dc63f84b-65d7-4311-b90e-eb0c2b8ef7ce" 00:15:12.269 ], 00:15:12.269 "product_name": "Raid Volume", 00:15:12.269 "block_size": 512, 00:15:12.269 "num_blocks": 262144, 00:15:12.269 "uuid": "dc63f84b-65d7-4311-b90e-eb0c2b8ef7ce", 00:15:12.269 "assigned_rate_limits": { 00:15:12.269 "rw_ios_per_sec": 0, 00:15:12.269 "rw_mbytes_per_sec": 0, 00:15:12.269 "r_mbytes_per_sec": 0, 00:15:12.269 "w_mbytes_per_sec": 0 00:15:12.269 }, 00:15:12.269 "claimed": false, 00:15:12.269 "zoned": false, 00:15:12.269 "supported_io_types": { 00:15:12.269 "read": true, 00:15:12.269 "write": true, 00:15:12.269 "unmap": true, 00:15:12.269 "flush": true, 00:15:12.269 "reset": true, 00:15:12.269 "nvme_admin": false, 00:15:12.269 "nvme_io": false, 00:15:12.269 "nvme_io_md": false, 00:15:12.269 "write_zeroes": true, 00:15:12.269 "zcopy": false, 00:15:12.269 "get_zone_info": false, 00:15:12.269 "zone_management": false, 00:15:12.269 "zone_append": false, 00:15:12.269 "compare": false, 00:15:12.269 "compare_and_write": false, 00:15:12.269 "abort": false, 00:15:12.269 "seek_hole": false, 00:15:12.269 "seek_data": false, 00:15:12.269 "copy": false, 00:15:12.269 "nvme_iov_md": false 00:15:12.269 }, 00:15:12.269 "memory_domains": [ 00:15:12.269 { 00:15:12.269 "dma_device_id": "system", 00:15:12.269 "dma_device_type": 1 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.269 "dma_device_type": 2 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "dma_device_id": "system", 00:15:12.269 "dma_device_type": 1 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.269 "dma_device_type": 2 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "dma_device_id": "system", 00:15:12.269 "dma_device_type": 1 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:12.269 "dma_device_type": 2 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "dma_device_id": "system", 00:15:12.269 "dma_device_type": 1 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.269 "dma_device_type": 2 00:15:12.269 } 00:15:12.269 ], 00:15:12.269 "driver_specific": { 00:15:12.269 "raid": { 00:15:12.269 "uuid": "dc63f84b-65d7-4311-b90e-eb0c2b8ef7ce", 00:15:12.269 "strip_size_kb": 64, 00:15:12.269 "state": "online", 00:15:12.269 "raid_level": "raid0", 00:15:12.269 "superblock": false, 00:15:12.269 "num_base_bdevs": 4, 00:15:12.269 "num_base_bdevs_discovered": 4, 00:15:12.269 "num_base_bdevs_operational": 4, 00:15:12.269 "base_bdevs_list": [ 00:15:12.269 { 00:15:12.269 "name": "BaseBdev1", 00:15:12.269 "uuid": "f395dbd1-2703-4929-bb1c-aad0195e7f7a", 00:15:12.269 "is_configured": true, 00:15:12.269 "data_offset": 0, 00:15:12.269 "data_size": 65536 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "name": "BaseBdev2", 00:15:12.269 "uuid": "27d6f251-abd4-41db-8891-b8ef4b8ecb92", 00:15:12.269 "is_configured": true, 00:15:12.269 "data_offset": 0, 00:15:12.269 "data_size": 65536 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "name": "BaseBdev3", 00:15:12.269 "uuid": "63928957-f92a-407f-ba64-1bec193d7386", 00:15:12.269 "is_configured": true, 00:15:12.269 "data_offset": 0, 00:15:12.269 "data_size": 65536 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "name": "BaseBdev4", 00:15:12.269 "uuid": "6ed8011f-8544-4dd7-b2fa-7768a9a65dde", 00:15:12.269 "is_configured": true, 00:15:12.269 "data_offset": 0, 00:15:12.269 "data_size": 65536 00:15:12.269 } 00:15:12.269 ] 00:15:12.269 } 00:15:12.269 } 00:15:12.269 }' 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:12.269 BaseBdev2 00:15:12.269 BaseBdev3 
00:15:12.269 BaseBdev4' 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.269 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.527 04:36:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.527 04:36:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.527 04:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.527 [2024-11-27 04:36:59.991181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.527 [2024-11-27 04:36:59.992159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.527 [2024-11-27 04:36:59.992249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.527 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.527 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:12.527 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:12.527 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:12.527 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:12.527 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:12.527 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.528 "name": "Existed_Raid", 00:15:12.528 "uuid": "dc63f84b-65d7-4311-b90e-eb0c2b8ef7ce", 00:15:12.528 "strip_size_kb": 64, 00:15:12.528 "state": "offline", 00:15:12.528 "raid_level": "raid0", 00:15:12.528 "superblock": false, 00:15:12.528 "num_base_bdevs": 4, 00:15:12.528 "num_base_bdevs_discovered": 3, 00:15:12.528 "num_base_bdevs_operational": 3, 00:15:12.528 "base_bdevs_list": [ 00:15:12.528 { 00:15:12.528 "name": null, 00:15:12.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.528 "is_configured": false, 00:15:12.528 "data_offset": 0, 00:15:12.528 "data_size": 65536 00:15:12.528 }, 00:15:12.528 { 00:15:12.528 "name": "BaseBdev2", 00:15:12.528 "uuid": "27d6f251-abd4-41db-8891-b8ef4b8ecb92", 00:15:12.528 "is_configured": 
true, 00:15:12.528 "data_offset": 0, 00:15:12.528 "data_size": 65536 00:15:12.528 }, 00:15:12.528 { 00:15:12.528 "name": "BaseBdev3", 00:15:12.528 "uuid": "63928957-f92a-407f-ba64-1bec193d7386", 00:15:12.528 "is_configured": true, 00:15:12.528 "data_offset": 0, 00:15:12.528 "data_size": 65536 00:15:12.528 }, 00:15:12.528 { 00:15:12.528 "name": "BaseBdev4", 00:15:12.528 "uuid": "6ed8011f-8544-4dd7-b2fa-7768a9a65dde", 00:15:12.528 "is_configured": true, 00:15:12.528 "data_offset": 0, 00:15:12.528 "data_size": 65536 00:15:12.528 } 00:15:12.528 ] 00:15:12.528 }' 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.528 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:13.095 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.095 [2024-11-27 04:37:00.654148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.352 [2024-11-27 04:37:00.794528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.352 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:13.352 04:37:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.353 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.353 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.353 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.353 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.353 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.353 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:13.353 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.353 04:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:13.353 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.353 04:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.353 [2024-11-27 04:37:00.930581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:13.353 [2024-11-27 04:37:00.930768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.611 BaseBdev2 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.611 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.612 [ 00:15:13.612 { 00:15:13.612 "name": "BaseBdev2", 00:15:13.612 "aliases": [ 00:15:13.612 "c5ecf916-e812-4f4d-839a-f5e811226b45" 00:15:13.612 ], 00:15:13.612 "product_name": "Malloc disk", 00:15:13.612 "block_size": 512, 00:15:13.612 "num_blocks": 65536, 00:15:13.612 "uuid": "c5ecf916-e812-4f4d-839a-f5e811226b45", 00:15:13.612 "assigned_rate_limits": { 00:15:13.612 "rw_ios_per_sec": 0, 00:15:13.612 "rw_mbytes_per_sec": 0, 00:15:13.612 "r_mbytes_per_sec": 0, 00:15:13.612 "w_mbytes_per_sec": 0 00:15:13.612 }, 00:15:13.612 "claimed": false, 00:15:13.612 "zoned": false, 00:15:13.612 "supported_io_types": { 00:15:13.612 "read": true, 00:15:13.612 "write": true, 00:15:13.612 "unmap": true, 00:15:13.612 "flush": true, 00:15:13.612 "reset": true, 00:15:13.612 "nvme_admin": false, 00:15:13.612 "nvme_io": false, 00:15:13.612 "nvme_io_md": false, 00:15:13.612 "write_zeroes": true, 00:15:13.612 "zcopy": true, 00:15:13.612 "get_zone_info": false, 00:15:13.612 "zone_management": false, 00:15:13.612 "zone_append": false, 00:15:13.612 "compare": false, 00:15:13.612 "compare_and_write": false, 00:15:13.612 "abort": true, 00:15:13.612 "seek_hole": false, 00:15:13.612 
"seek_data": false, 00:15:13.612 "copy": true, 00:15:13.612 "nvme_iov_md": false 00:15:13.612 }, 00:15:13.612 "memory_domains": [ 00:15:13.612 { 00:15:13.612 "dma_device_id": "system", 00:15:13.612 "dma_device_type": 1 00:15:13.612 }, 00:15:13.612 { 00:15:13.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.612 "dma_device_type": 2 00:15:13.612 } 00:15:13.612 ], 00:15:13.612 "driver_specific": {} 00:15:13.612 } 00:15:13.612 ] 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.612 BaseBdev3 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.612 [ 00:15:13.612 { 00:15:13.612 "name": "BaseBdev3", 00:15:13.612 "aliases": [ 00:15:13.612 "b96da4d6-175b-42b4-addb-40c9e100a4f2" 00:15:13.612 ], 00:15:13.612 "product_name": "Malloc disk", 00:15:13.612 "block_size": 512, 00:15:13.612 "num_blocks": 65536, 00:15:13.612 "uuid": "b96da4d6-175b-42b4-addb-40c9e100a4f2", 00:15:13.612 "assigned_rate_limits": { 00:15:13.612 "rw_ios_per_sec": 0, 00:15:13.612 "rw_mbytes_per_sec": 0, 00:15:13.612 "r_mbytes_per_sec": 0, 00:15:13.612 "w_mbytes_per_sec": 0 00:15:13.612 }, 00:15:13.612 "claimed": false, 00:15:13.612 "zoned": false, 00:15:13.612 "supported_io_types": { 00:15:13.612 "read": true, 00:15:13.612 "write": true, 00:15:13.612 "unmap": true, 00:15:13.612 "flush": true, 00:15:13.612 "reset": true, 00:15:13.612 "nvme_admin": false, 00:15:13.612 "nvme_io": false, 00:15:13.612 "nvme_io_md": false, 00:15:13.612 "write_zeroes": true, 00:15:13.612 "zcopy": true, 00:15:13.612 "get_zone_info": false, 00:15:13.612 "zone_management": false, 00:15:13.612 "zone_append": false, 00:15:13.612 "compare": false, 00:15:13.612 "compare_and_write": false, 00:15:13.612 "abort": true, 00:15:13.612 "seek_hole": false, 00:15:13.612 "seek_data": false, 
00:15:13.612 "copy": true, 00:15:13.612 "nvme_iov_md": false 00:15:13.612 }, 00:15:13.612 "memory_domains": [ 00:15:13.612 { 00:15:13.612 "dma_device_id": "system", 00:15:13.612 "dma_device_type": 1 00:15:13.612 }, 00:15:13.612 { 00:15:13.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.612 "dma_device_type": 2 00:15:13.612 } 00:15:13.612 ], 00:15:13.612 "driver_specific": {} 00:15:13.612 } 00:15:13.612 ] 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:13.612 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.871 BaseBdev4 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.871 
04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.871 [ 00:15:13.871 { 00:15:13.871 "name": "BaseBdev4", 00:15:13.871 "aliases": [ 00:15:13.871 "0d0f7681-096f-497c-9186-c6c7a4c4096c" 00:15:13.871 ], 00:15:13.871 "product_name": "Malloc disk", 00:15:13.871 "block_size": 512, 00:15:13.871 "num_blocks": 65536, 00:15:13.871 "uuid": "0d0f7681-096f-497c-9186-c6c7a4c4096c", 00:15:13.871 "assigned_rate_limits": { 00:15:13.871 "rw_ios_per_sec": 0, 00:15:13.871 "rw_mbytes_per_sec": 0, 00:15:13.871 "r_mbytes_per_sec": 0, 00:15:13.871 "w_mbytes_per_sec": 0 00:15:13.871 }, 00:15:13.871 "claimed": false, 00:15:13.871 "zoned": false, 00:15:13.871 "supported_io_types": { 00:15:13.871 "read": true, 00:15:13.871 "write": true, 00:15:13.871 "unmap": true, 00:15:13.871 "flush": true, 00:15:13.871 "reset": true, 00:15:13.871 "nvme_admin": false, 00:15:13.871 "nvme_io": false, 00:15:13.871 "nvme_io_md": false, 00:15:13.871 "write_zeroes": true, 00:15:13.871 "zcopy": true, 00:15:13.871 "get_zone_info": false, 00:15:13.871 "zone_management": false, 00:15:13.871 "zone_append": false, 00:15:13.871 "compare": false, 00:15:13.871 "compare_and_write": false, 00:15:13.871 "abort": true, 00:15:13.871 "seek_hole": false, 00:15:13.871 "seek_data": false, 00:15:13.871 
"copy": true, 00:15:13.871 "nvme_iov_md": false 00:15:13.871 }, 00:15:13.871 "memory_domains": [ 00:15:13.871 { 00:15:13.871 "dma_device_id": "system", 00:15:13.871 "dma_device_type": 1 00:15:13.871 }, 00:15:13.871 { 00:15:13.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.871 "dma_device_type": 2 00:15:13.871 } 00:15:13.871 ], 00:15:13.871 "driver_specific": {} 00:15:13.871 } 00:15:13.871 ] 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.871 [2024-11-27 04:37:01.317000] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.871 [2024-11-27 04:37:01.317178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.871 [2024-11-27 04:37:01.317225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.871 [2024-11-27 04:37:01.319651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.871 [2024-11-27 04:37:01.319726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.871 04:37:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.871 "name": "Existed_Raid", 00:15:13.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.871 "strip_size_kb": 64, 00:15:13.871 "state": "configuring", 00:15:13.871 
"raid_level": "raid0", 00:15:13.871 "superblock": false, 00:15:13.871 "num_base_bdevs": 4, 00:15:13.871 "num_base_bdevs_discovered": 3, 00:15:13.871 "num_base_bdevs_operational": 4, 00:15:13.871 "base_bdevs_list": [ 00:15:13.871 { 00:15:13.871 "name": "BaseBdev1", 00:15:13.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.871 "is_configured": false, 00:15:13.871 "data_offset": 0, 00:15:13.871 "data_size": 0 00:15:13.871 }, 00:15:13.871 { 00:15:13.871 "name": "BaseBdev2", 00:15:13.871 "uuid": "c5ecf916-e812-4f4d-839a-f5e811226b45", 00:15:13.871 "is_configured": true, 00:15:13.871 "data_offset": 0, 00:15:13.871 "data_size": 65536 00:15:13.871 }, 00:15:13.871 { 00:15:13.871 "name": "BaseBdev3", 00:15:13.871 "uuid": "b96da4d6-175b-42b4-addb-40c9e100a4f2", 00:15:13.871 "is_configured": true, 00:15:13.871 "data_offset": 0, 00:15:13.871 "data_size": 65536 00:15:13.871 }, 00:15:13.871 { 00:15:13.871 "name": "BaseBdev4", 00:15:13.871 "uuid": "0d0f7681-096f-497c-9186-c6c7a4c4096c", 00:15:13.871 "is_configured": true, 00:15:13.871 "data_offset": 0, 00:15:13.871 "data_size": 65536 00:15:13.871 } 00:15:13.871 ] 00:15:13.871 }' 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.871 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.472 [2024-11-27 04:37:01.805182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.472 "name": "Existed_Raid", 00:15:14.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.472 "strip_size_kb": 64, 00:15:14.472 "state": "configuring", 00:15:14.472 "raid_level": "raid0", 00:15:14.472 "superblock": false, 00:15:14.472 
"num_base_bdevs": 4, 00:15:14.472 "num_base_bdevs_discovered": 2, 00:15:14.472 "num_base_bdevs_operational": 4, 00:15:14.472 "base_bdevs_list": [ 00:15:14.472 { 00:15:14.472 "name": "BaseBdev1", 00:15:14.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.472 "is_configured": false, 00:15:14.472 "data_offset": 0, 00:15:14.472 "data_size": 0 00:15:14.472 }, 00:15:14.472 { 00:15:14.472 "name": null, 00:15:14.472 "uuid": "c5ecf916-e812-4f4d-839a-f5e811226b45", 00:15:14.472 "is_configured": false, 00:15:14.472 "data_offset": 0, 00:15:14.472 "data_size": 65536 00:15:14.472 }, 00:15:14.472 { 00:15:14.472 "name": "BaseBdev3", 00:15:14.472 "uuid": "b96da4d6-175b-42b4-addb-40c9e100a4f2", 00:15:14.472 "is_configured": true, 00:15:14.472 "data_offset": 0, 00:15:14.472 "data_size": 65536 00:15:14.472 }, 00:15:14.472 { 00:15:14.472 "name": "BaseBdev4", 00:15:14.472 "uuid": "0d0f7681-096f-497c-9186-c6c7a4c4096c", 00:15:14.472 "is_configured": true, 00:15:14.472 "data_offset": 0, 00:15:14.472 "data_size": 65536 00:15:14.472 } 00:15:14.472 ] 00:15:14.472 }' 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.472 04:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:15.039 04:37:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.039 [2024-11-27 04:37:02.450761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.039 BaseBdev1 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.039 [ 00:15:15.039 { 00:15:15.039 "name": "BaseBdev1", 00:15:15.039 "aliases": [ 00:15:15.039 "1daae5d2-4740-478f-90a6-35f986b2d445" 00:15:15.039 ], 00:15:15.039 "product_name": "Malloc disk", 00:15:15.039 "block_size": 512, 00:15:15.039 "num_blocks": 65536, 00:15:15.039 "uuid": "1daae5d2-4740-478f-90a6-35f986b2d445", 00:15:15.039 "assigned_rate_limits": { 00:15:15.039 "rw_ios_per_sec": 0, 00:15:15.039 "rw_mbytes_per_sec": 0, 00:15:15.039 "r_mbytes_per_sec": 0, 00:15:15.039 "w_mbytes_per_sec": 0 00:15:15.039 }, 00:15:15.039 "claimed": true, 00:15:15.039 "claim_type": "exclusive_write", 00:15:15.039 "zoned": false, 00:15:15.039 "supported_io_types": { 00:15:15.039 "read": true, 00:15:15.039 "write": true, 00:15:15.039 "unmap": true, 00:15:15.039 "flush": true, 00:15:15.039 "reset": true, 00:15:15.039 "nvme_admin": false, 00:15:15.039 "nvme_io": false, 00:15:15.039 "nvme_io_md": false, 00:15:15.039 "write_zeroes": true, 00:15:15.039 "zcopy": true, 00:15:15.039 "get_zone_info": false, 00:15:15.039 "zone_management": false, 00:15:15.039 "zone_append": false, 00:15:15.039 "compare": false, 00:15:15.039 "compare_and_write": false, 00:15:15.039 "abort": true, 00:15:15.039 "seek_hole": false, 00:15:15.039 "seek_data": false, 00:15:15.039 "copy": true, 00:15:15.039 "nvme_iov_md": false 00:15:15.039 }, 00:15:15.039 "memory_domains": [ 00:15:15.039 { 00:15:15.039 "dma_device_id": "system", 00:15:15.039 "dma_device_type": 1 00:15:15.039 }, 00:15:15.039 { 00:15:15.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.039 "dma_device_type": 2 00:15:15.039 } 00:15:15.039 ], 00:15:15.039 "driver_specific": {} 00:15:15.039 } 00:15:15.039 ] 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.039 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.040 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.040 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.040 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.040 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.040 "name": "Existed_Raid", 00:15:15.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.040 "strip_size_kb": 64, 00:15:15.040 "state": "configuring", 00:15:15.040 "raid_level": "raid0", 00:15:15.040 "superblock": false, 
00:15:15.040 "num_base_bdevs": 4, 00:15:15.040 "num_base_bdevs_discovered": 3, 00:15:15.040 "num_base_bdevs_operational": 4, 00:15:15.040 "base_bdevs_list": [ 00:15:15.040 { 00:15:15.040 "name": "BaseBdev1", 00:15:15.040 "uuid": "1daae5d2-4740-478f-90a6-35f986b2d445", 00:15:15.040 "is_configured": true, 00:15:15.040 "data_offset": 0, 00:15:15.040 "data_size": 65536 00:15:15.040 }, 00:15:15.040 { 00:15:15.040 "name": null, 00:15:15.040 "uuid": "c5ecf916-e812-4f4d-839a-f5e811226b45", 00:15:15.040 "is_configured": false, 00:15:15.040 "data_offset": 0, 00:15:15.040 "data_size": 65536 00:15:15.040 }, 00:15:15.040 { 00:15:15.040 "name": "BaseBdev3", 00:15:15.040 "uuid": "b96da4d6-175b-42b4-addb-40c9e100a4f2", 00:15:15.040 "is_configured": true, 00:15:15.040 "data_offset": 0, 00:15:15.040 "data_size": 65536 00:15:15.040 }, 00:15:15.040 { 00:15:15.040 "name": "BaseBdev4", 00:15:15.040 "uuid": "0d0f7681-096f-497c-9186-c6c7a4c4096c", 00:15:15.040 "is_configured": true, 00:15:15.040 "data_offset": 0, 00:15:15.040 "data_size": 65536 00:15:15.040 } 00:15:15.040 ] 00:15:15.040 }' 00:15:15.040 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.040 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.606 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.606 04:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:15.606 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.606 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.606 04:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:15.606 04:37:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.606 [2024-11-27 04:37:03.035014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.606 "name": "Existed_Raid", 00:15:15.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.606 "strip_size_kb": 64, 00:15:15.606 "state": "configuring", 00:15:15.606 "raid_level": "raid0", 00:15:15.606 "superblock": false, 00:15:15.606 "num_base_bdevs": 4, 00:15:15.606 "num_base_bdevs_discovered": 2, 00:15:15.606 "num_base_bdevs_operational": 4, 00:15:15.606 "base_bdevs_list": [ 00:15:15.606 { 00:15:15.606 "name": "BaseBdev1", 00:15:15.606 "uuid": "1daae5d2-4740-478f-90a6-35f986b2d445", 00:15:15.606 "is_configured": true, 00:15:15.606 "data_offset": 0, 00:15:15.606 "data_size": 65536 00:15:15.606 }, 00:15:15.606 { 00:15:15.606 "name": null, 00:15:15.606 "uuid": "c5ecf916-e812-4f4d-839a-f5e811226b45", 00:15:15.606 "is_configured": false, 00:15:15.606 "data_offset": 0, 00:15:15.606 "data_size": 65536 00:15:15.606 }, 00:15:15.606 { 00:15:15.606 "name": null, 00:15:15.606 "uuid": "b96da4d6-175b-42b4-addb-40c9e100a4f2", 00:15:15.606 "is_configured": false, 00:15:15.606 "data_offset": 0, 00:15:15.606 "data_size": 65536 00:15:15.606 }, 00:15:15.606 { 00:15:15.606 "name": "BaseBdev4", 00:15:15.606 "uuid": "0d0f7681-096f-497c-9186-c6c7a4c4096c", 00:15:15.606 "is_configured": true, 00:15:15.606 "data_offset": 0, 00:15:15.606 "data_size": 65536 00:15:15.606 } 00:15:15.606 ] 00:15:15.606 }' 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.606 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.176 [2024-11-27 04:37:03.659167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.176 "name": "Existed_Raid", 00:15:16.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.176 "strip_size_kb": 64, 00:15:16.176 "state": "configuring", 00:15:16.176 "raid_level": "raid0", 00:15:16.176 "superblock": false, 00:15:16.176 "num_base_bdevs": 4, 00:15:16.176 "num_base_bdevs_discovered": 3, 00:15:16.176 "num_base_bdevs_operational": 4, 00:15:16.176 "base_bdevs_list": [ 00:15:16.176 { 00:15:16.176 "name": "BaseBdev1", 00:15:16.176 "uuid": "1daae5d2-4740-478f-90a6-35f986b2d445", 00:15:16.176 "is_configured": true, 00:15:16.176 "data_offset": 0, 00:15:16.176 "data_size": 65536 00:15:16.176 }, 00:15:16.176 { 00:15:16.176 "name": null, 00:15:16.176 "uuid": "c5ecf916-e812-4f4d-839a-f5e811226b45", 00:15:16.176 "is_configured": false, 00:15:16.176 "data_offset": 0, 00:15:16.176 "data_size": 65536 00:15:16.176 }, 00:15:16.176 { 00:15:16.176 "name": "BaseBdev3", 00:15:16.176 "uuid": "b96da4d6-175b-42b4-addb-40c9e100a4f2", 00:15:16.176 "is_configured": 
true, 00:15:16.176 "data_offset": 0, 00:15:16.176 "data_size": 65536 00:15:16.176 }, 00:15:16.176 { 00:15:16.176 "name": "BaseBdev4", 00:15:16.176 "uuid": "0d0f7681-096f-497c-9186-c6c7a4c4096c", 00:15:16.176 "is_configured": true, 00:15:16.176 "data_offset": 0, 00:15:16.176 "data_size": 65536 00:15:16.176 } 00:15:16.176 ] 00:15:16.176 }' 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.176 04:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.741 [2024-11-27 04:37:04.243347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.741 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.742 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.742 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.742 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.742 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.742 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.742 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.742 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.742 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.742 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.000 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.000 "name": "Existed_Raid", 00:15:17.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.000 "strip_size_kb": 64, 00:15:17.000 "state": "configuring", 00:15:17.000 "raid_level": "raid0", 00:15:17.000 "superblock": false, 00:15:17.000 "num_base_bdevs": 4, 00:15:17.000 "num_base_bdevs_discovered": 2, 00:15:17.000 "num_base_bdevs_operational": 4, 00:15:17.000 
"base_bdevs_list": [ 00:15:17.000 { 00:15:17.000 "name": null, 00:15:17.000 "uuid": "1daae5d2-4740-478f-90a6-35f986b2d445", 00:15:17.000 "is_configured": false, 00:15:17.000 "data_offset": 0, 00:15:17.000 "data_size": 65536 00:15:17.000 }, 00:15:17.000 { 00:15:17.000 "name": null, 00:15:17.000 "uuid": "c5ecf916-e812-4f4d-839a-f5e811226b45", 00:15:17.000 "is_configured": false, 00:15:17.000 "data_offset": 0, 00:15:17.000 "data_size": 65536 00:15:17.000 }, 00:15:17.000 { 00:15:17.000 "name": "BaseBdev3", 00:15:17.000 "uuid": "b96da4d6-175b-42b4-addb-40c9e100a4f2", 00:15:17.000 "is_configured": true, 00:15:17.000 "data_offset": 0, 00:15:17.000 "data_size": 65536 00:15:17.000 }, 00:15:17.000 { 00:15:17.000 "name": "BaseBdev4", 00:15:17.000 "uuid": "0d0f7681-096f-497c-9186-c6c7a4c4096c", 00:15:17.000 "is_configured": true, 00:15:17.000 "data_offset": 0, 00:15:17.000 "data_size": 65536 00:15:17.000 } 00:15:17.000 ] 00:15:17.000 }' 00:15:17.000 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.000 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.258 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.258 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:17.258 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.258 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.258 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.258 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:17.258 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:17.258 04:37:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.258 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.516 [2024-11-27 04:37:04.879960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.516 "name": "Existed_Raid", 00:15:17.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.516 "strip_size_kb": 64, 00:15:17.516 "state": "configuring", 00:15:17.516 "raid_level": "raid0", 00:15:17.516 "superblock": false, 00:15:17.516 "num_base_bdevs": 4, 00:15:17.516 "num_base_bdevs_discovered": 3, 00:15:17.516 "num_base_bdevs_operational": 4, 00:15:17.516 "base_bdevs_list": [ 00:15:17.516 { 00:15:17.516 "name": null, 00:15:17.516 "uuid": "1daae5d2-4740-478f-90a6-35f986b2d445", 00:15:17.516 "is_configured": false, 00:15:17.516 "data_offset": 0, 00:15:17.516 "data_size": 65536 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "name": "BaseBdev2", 00:15:17.516 "uuid": "c5ecf916-e812-4f4d-839a-f5e811226b45", 00:15:17.516 "is_configured": true, 00:15:17.516 "data_offset": 0, 00:15:17.516 "data_size": 65536 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "name": "BaseBdev3", 00:15:17.516 "uuid": "b96da4d6-175b-42b4-addb-40c9e100a4f2", 00:15:17.516 "is_configured": true, 00:15:17.516 "data_offset": 0, 00:15:17.516 "data_size": 65536 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "name": "BaseBdev4", 00:15:17.516 "uuid": "0d0f7681-096f-497c-9186-c6c7a4c4096c", 00:15:17.516 "is_configured": true, 00:15:17.516 "data_offset": 0, 00:15:17.516 "data_size": 65536 00:15:17.516 } 00:15:17.516 ] 00:15:17.516 }' 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.516 04:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.774 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:17.774 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:17.774 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1daae5d2-4740-478f-90a6-35f986b2d445 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.032 [2024-11-27 04:37:05.529654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:18.032 [2024-11-27 04:37:05.529731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:18.032 [2024-11-27 04:37:05.529748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:18.032 [2024-11-27 04:37:05.530158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:18.032 [2024-11-27 04:37:05.530378] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:18.032 [2024-11-27 04:37:05.530403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:18.032 [2024-11-27 04:37:05.530765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.032 NewBaseBdev 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.032 [ 00:15:18.032 { 
00:15:18.032 "name": "NewBaseBdev", 00:15:18.032 "aliases": [ 00:15:18.032 "1daae5d2-4740-478f-90a6-35f986b2d445" 00:15:18.032 ], 00:15:18.032 "product_name": "Malloc disk", 00:15:18.032 "block_size": 512, 00:15:18.032 "num_blocks": 65536, 00:15:18.032 "uuid": "1daae5d2-4740-478f-90a6-35f986b2d445", 00:15:18.032 "assigned_rate_limits": { 00:15:18.032 "rw_ios_per_sec": 0, 00:15:18.032 "rw_mbytes_per_sec": 0, 00:15:18.032 "r_mbytes_per_sec": 0, 00:15:18.032 "w_mbytes_per_sec": 0 00:15:18.032 }, 00:15:18.032 "claimed": true, 00:15:18.032 "claim_type": "exclusive_write", 00:15:18.032 "zoned": false, 00:15:18.032 "supported_io_types": { 00:15:18.032 "read": true, 00:15:18.032 "write": true, 00:15:18.032 "unmap": true, 00:15:18.032 "flush": true, 00:15:18.032 "reset": true, 00:15:18.032 "nvme_admin": false, 00:15:18.032 "nvme_io": false, 00:15:18.032 "nvme_io_md": false, 00:15:18.032 "write_zeroes": true, 00:15:18.032 "zcopy": true, 00:15:18.032 "get_zone_info": false, 00:15:18.032 "zone_management": false, 00:15:18.032 "zone_append": false, 00:15:18.032 "compare": false, 00:15:18.032 "compare_and_write": false, 00:15:18.032 "abort": true, 00:15:18.032 "seek_hole": false, 00:15:18.032 "seek_data": false, 00:15:18.032 "copy": true, 00:15:18.032 "nvme_iov_md": false 00:15:18.032 }, 00:15:18.032 "memory_domains": [ 00:15:18.032 { 00:15:18.032 "dma_device_id": "system", 00:15:18.032 "dma_device_type": 1 00:15:18.032 }, 00:15:18.032 { 00:15:18.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.032 "dma_device_type": 2 00:15:18.032 } 00:15:18.032 ], 00:15:18.032 "driver_specific": {} 00:15:18.032 } 00:15:18.032 ] 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:18.032 
04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.032 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.032 "name": "Existed_Raid", 00:15:18.032 "uuid": "3b6629e5-14b5-4d02-9372-76a216c3597a", 00:15:18.032 "strip_size_kb": 64, 00:15:18.032 "state": "online", 00:15:18.032 "raid_level": "raid0", 00:15:18.032 "superblock": false, 00:15:18.032 "num_base_bdevs": 4, 00:15:18.032 "num_base_bdevs_discovered": 4, 00:15:18.032 
"num_base_bdevs_operational": 4, 00:15:18.032 "base_bdevs_list": [ 00:15:18.032 { 00:15:18.033 "name": "NewBaseBdev", 00:15:18.033 "uuid": "1daae5d2-4740-478f-90a6-35f986b2d445", 00:15:18.033 "is_configured": true, 00:15:18.033 "data_offset": 0, 00:15:18.033 "data_size": 65536 00:15:18.033 }, 00:15:18.033 { 00:15:18.033 "name": "BaseBdev2", 00:15:18.033 "uuid": "c5ecf916-e812-4f4d-839a-f5e811226b45", 00:15:18.033 "is_configured": true, 00:15:18.033 "data_offset": 0, 00:15:18.033 "data_size": 65536 00:15:18.033 }, 00:15:18.033 { 00:15:18.033 "name": "BaseBdev3", 00:15:18.033 "uuid": "b96da4d6-175b-42b4-addb-40c9e100a4f2", 00:15:18.033 "is_configured": true, 00:15:18.033 "data_offset": 0, 00:15:18.033 "data_size": 65536 00:15:18.033 }, 00:15:18.033 { 00:15:18.033 "name": "BaseBdev4", 00:15:18.033 "uuid": "0d0f7681-096f-497c-9186-c6c7a4c4096c", 00:15:18.033 "is_configured": true, 00:15:18.033 "data_offset": 0, 00:15:18.033 "data_size": 65536 00:15:18.033 } 00:15:18.033 ] 00:15:18.033 }' 00:15:18.033 04:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.033 04:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.599 [2024-11-27 04:37:06.066290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:18.599 "name": "Existed_Raid", 00:15:18.599 "aliases": [ 00:15:18.599 "3b6629e5-14b5-4d02-9372-76a216c3597a" 00:15:18.599 ], 00:15:18.599 "product_name": "Raid Volume", 00:15:18.599 "block_size": 512, 00:15:18.599 "num_blocks": 262144, 00:15:18.599 "uuid": "3b6629e5-14b5-4d02-9372-76a216c3597a", 00:15:18.599 "assigned_rate_limits": { 00:15:18.599 "rw_ios_per_sec": 0, 00:15:18.599 "rw_mbytes_per_sec": 0, 00:15:18.599 "r_mbytes_per_sec": 0, 00:15:18.599 "w_mbytes_per_sec": 0 00:15:18.599 }, 00:15:18.599 "claimed": false, 00:15:18.599 "zoned": false, 00:15:18.599 "supported_io_types": { 00:15:18.599 "read": true, 00:15:18.599 "write": true, 00:15:18.599 "unmap": true, 00:15:18.599 "flush": true, 00:15:18.599 "reset": true, 00:15:18.599 "nvme_admin": false, 00:15:18.599 "nvme_io": false, 00:15:18.599 "nvme_io_md": false, 00:15:18.599 "write_zeroes": true, 00:15:18.599 "zcopy": false, 00:15:18.599 "get_zone_info": false, 00:15:18.599 "zone_management": false, 00:15:18.599 "zone_append": false, 00:15:18.599 "compare": false, 00:15:18.599 "compare_and_write": false, 00:15:18.599 "abort": false, 00:15:18.599 "seek_hole": false, 00:15:18.599 "seek_data": false, 00:15:18.599 "copy": false, 00:15:18.599 "nvme_iov_md": false 00:15:18.599 }, 00:15:18.599 "memory_domains": [ 00:15:18.599 { 00:15:18.599 "dma_device_id": "system", 
00:15:18.599 "dma_device_type": 1 00:15:18.599 }, 00:15:18.599 { 00:15:18.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.599 "dma_device_type": 2 00:15:18.599 }, 00:15:18.599 { 00:15:18.599 "dma_device_id": "system", 00:15:18.599 "dma_device_type": 1 00:15:18.599 }, 00:15:18.599 { 00:15:18.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.599 "dma_device_type": 2 00:15:18.599 }, 00:15:18.599 { 00:15:18.599 "dma_device_id": "system", 00:15:18.599 "dma_device_type": 1 00:15:18.599 }, 00:15:18.599 { 00:15:18.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.599 "dma_device_type": 2 00:15:18.599 }, 00:15:18.599 { 00:15:18.599 "dma_device_id": "system", 00:15:18.599 "dma_device_type": 1 00:15:18.599 }, 00:15:18.599 { 00:15:18.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.599 "dma_device_type": 2 00:15:18.599 } 00:15:18.599 ], 00:15:18.599 "driver_specific": { 00:15:18.599 "raid": { 00:15:18.599 "uuid": "3b6629e5-14b5-4d02-9372-76a216c3597a", 00:15:18.599 "strip_size_kb": 64, 00:15:18.599 "state": "online", 00:15:18.599 "raid_level": "raid0", 00:15:18.599 "superblock": false, 00:15:18.599 "num_base_bdevs": 4, 00:15:18.599 "num_base_bdevs_discovered": 4, 00:15:18.599 "num_base_bdevs_operational": 4, 00:15:18.599 "base_bdevs_list": [ 00:15:18.599 { 00:15:18.599 "name": "NewBaseBdev", 00:15:18.599 "uuid": "1daae5d2-4740-478f-90a6-35f986b2d445", 00:15:18.599 "is_configured": true, 00:15:18.599 "data_offset": 0, 00:15:18.599 "data_size": 65536 00:15:18.599 }, 00:15:18.599 { 00:15:18.599 "name": "BaseBdev2", 00:15:18.599 "uuid": "c5ecf916-e812-4f4d-839a-f5e811226b45", 00:15:18.599 "is_configured": true, 00:15:18.599 "data_offset": 0, 00:15:18.599 "data_size": 65536 00:15:18.599 }, 00:15:18.599 { 00:15:18.599 "name": "BaseBdev3", 00:15:18.599 "uuid": "b96da4d6-175b-42b4-addb-40c9e100a4f2", 00:15:18.599 "is_configured": true, 00:15:18.599 "data_offset": 0, 00:15:18.599 "data_size": 65536 00:15:18.599 }, 00:15:18.599 { 00:15:18.599 "name": "BaseBdev4", 
00:15:18.599 "uuid": "0d0f7681-096f-497c-9186-c6c7a4c4096c", 00:15:18.599 "is_configured": true, 00:15:18.599 "data_offset": 0, 00:15:18.599 "data_size": 65536 00:15:18.599 } 00:15:18.599 ] 00:15:18.599 } 00:15:18.599 } 00:15:18.599 }' 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:18.599 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:18.599 BaseBdev2 00:15:18.599 BaseBdev3 00:15:18.600 BaseBdev4' 00:15:18.600 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.857 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:18.858 04:37:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.858 [2024-11-27 04:37:06.437909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.858 [2024-11-27 04:37:06.437947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.858 [2024-11-27 04:37:06.438072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.858 [2024-11-27 04:37:06.438164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.858 [2024-11-27 04:37:06.438181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69578 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69578 ']' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69578 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69578 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69578' 00:15:18.858 killing process with pid 69578 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69578 00:15:18.858 [2024-11-27 04:37:06.477583] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.858 04:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69578 00:15:19.422 [2024-11-27 04:37:06.824499] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.364 04:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:20.364 00:15:20.364 real 0m12.635s 00:15:20.364 user 0m20.993s 00:15:20.364 sys 0m1.604s 00:15:20.364 04:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.364 04:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.364 ************************************ 00:15:20.364 END TEST raid_state_function_test 00:15:20.364 ************************************ 00:15:20.364 04:37:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:15:20.364 04:37:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:20.364 04:37:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.364 04:37:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.364 ************************************ 00:15:20.364 START TEST raid_state_function_test_sb 00:15:20.364 ************************************ 00:15:20.364 04:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:15:20.364 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:20.364 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:20.364 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:20.365 04:37:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:20.365 Process raid pid: 70262 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70262 00:15:20.365 04:37:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70262' 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70262 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70262 ']' 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.365 04:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.623 [2024-11-27 04:37:08.038028] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:15:20.623 [2024-11-27 04:37:08.038235] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.623 [2024-11-27 04:37:08.230818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.881 [2024-11-27 04:37:08.391229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.139 [2024-11-27 04:37:08.597281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.139 [2024-11-27 04:37:08.597338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.397 04:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.397 04:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:21.397 04:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:21.397 04:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.397 04:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.397 [2024-11-27 04:37:09.000859] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.398 [2024-11-27 04:37:09.000926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.398 [2024-11-27 04:37:09.000961] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.398 [2024-11-27 04:37:09.000987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.398 [2024-11-27 04:37:09.001002] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:15:21.398 [2024-11-27 04:37:09.001025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.398 [2024-11-27 04:37:09.001039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:21.398 [2024-11-27 04:37:09.001060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.398 04:37:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.398 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.656 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.656 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.656 "name": "Existed_Raid", 00:15:21.656 "uuid": "3efd358a-df53-48ce-a2d6-120087c682d7", 00:15:21.656 "strip_size_kb": 64, 00:15:21.656 "state": "configuring", 00:15:21.656 "raid_level": "raid0", 00:15:21.656 "superblock": true, 00:15:21.656 "num_base_bdevs": 4, 00:15:21.656 "num_base_bdevs_discovered": 0, 00:15:21.656 "num_base_bdevs_operational": 4, 00:15:21.656 "base_bdevs_list": [ 00:15:21.656 { 00:15:21.656 "name": "BaseBdev1", 00:15:21.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.656 "is_configured": false, 00:15:21.656 "data_offset": 0, 00:15:21.656 "data_size": 0 00:15:21.656 }, 00:15:21.656 { 00:15:21.656 "name": "BaseBdev2", 00:15:21.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.656 "is_configured": false, 00:15:21.656 "data_offset": 0, 00:15:21.656 "data_size": 0 00:15:21.656 }, 00:15:21.656 { 00:15:21.656 "name": "BaseBdev3", 00:15:21.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.656 "is_configured": false, 00:15:21.656 "data_offset": 0, 00:15:21.656 "data_size": 0 00:15:21.656 }, 00:15:21.656 { 00:15:21.656 "name": "BaseBdev4", 00:15:21.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.656 "is_configured": false, 00:15:21.656 "data_offset": 0, 00:15:21.656 "data_size": 0 00:15:21.656 } 00:15:21.656 ] 00:15:21.656 }' 00:15:21.656 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.656 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.915 04:37:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:21.915 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.915 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.915 [2024-11-27 04:37:09.512931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.915 [2024-11-27 04:37:09.512978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:21.915 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.915 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:21.915 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.915 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.915 [2024-11-27 04:37:09.520928] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.915 [2024-11-27 04:37:09.520992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.915 [2024-11-27 04:37:09.521016] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.915 [2024-11-27 04:37:09.521041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.915 [2024-11-27 04:37:09.521056] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.915 [2024-11-27 04:37:09.521077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.915 [2024-11-27 04:37:09.521091] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:15:21.915 [2024-11-27 04:37:09.521114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:21.915 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.915 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:21.915 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.915 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.175 [2024-11-27 04:37:09.565839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.175 BaseBdev1 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:22.175 [
00:15:22.175 {
00:15:22.175 "name": "BaseBdev1",
00:15:22.175 "aliases": [
00:15:22.175 "f5861720-5587-4a0b-ad5a-8ec8b5a7adee"
00:15:22.175 ],
00:15:22.175 "product_name": "Malloc disk",
00:15:22.175 "block_size": 512,
00:15:22.175 "num_blocks": 65536,
00:15:22.175 "uuid": "f5861720-5587-4a0b-ad5a-8ec8b5a7adee",
00:15:22.175 "assigned_rate_limits": {
00:15:22.175 "rw_ios_per_sec": 0,
00:15:22.175 "rw_mbytes_per_sec": 0,
00:15:22.175 "r_mbytes_per_sec": 0,
00:15:22.175 "w_mbytes_per_sec": 0
00:15:22.175 },
00:15:22.175 "claimed": true,
00:15:22.175 "claim_type": "exclusive_write",
00:15:22.175 "zoned": false,
00:15:22.175 "supported_io_types": {
00:15:22.175 "read": true,
00:15:22.175 "write": true,
00:15:22.175 "unmap": true,
00:15:22.175 "flush": true,
00:15:22.175 "reset": true,
00:15:22.175 "nvme_admin": false,
00:15:22.175 "nvme_io": false,
00:15:22.175 "nvme_io_md": false,
00:15:22.175 "write_zeroes": true,
00:15:22.175 "zcopy": true,
00:15:22.175 "get_zone_info": false,
00:15:22.175 "zone_management": false,
00:15:22.175 "zone_append": false,
00:15:22.175 "compare": false,
00:15:22.175 "compare_and_write": false,
00:15:22.175 "abort": true,
00:15:22.175 "seek_hole": false,
00:15:22.175 "seek_data": false,
00:15:22.175 "copy": true,
00:15:22.175 "nvme_iov_md": false
00:15:22.175 },
00:15:22.175 "memory_domains": [
00:15:22.175 {
00:15:22.175 "dma_device_id": "system",
00:15:22.175 "dma_device_type": 1
00:15:22.175 },
00:15:22.175 {
00:15:22.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:22.175 "dma_device_type": 2
00:15:22.175 }
00:15:22.175 ],
00:15:22.175 "driver_specific": {}
00:15:22.175 }
00:15:22.175 ]
00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.175 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:22.176 "name": "Existed_Raid",
00:15:22.176 "uuid": "92aeef03-41b8-48a0-bb69-c1a6410edb87",
00:15:22.176 "strip_size_kb": 64,
00:15:22.176 "state": "configuring",
00:15:22.176 "raid_level": "raid0",
00:15:22.176 "superblock": true,
00:15:22.176 "num_base_bdevs": 4,
00:15:22.176 "num_base_bdevs_discovered": 1,
00:15:22.176 "num_base_bdevs_operational": 4,
00:15:22.176 "base_bdevs_list": [
00:15:22.176 {
00:15:22.176 "name": "BaseBdev1",
00:15:22.176 "uuid": "f5861720-5587-4a0b-ad5a-8ec8b5a7adee",
00:15:22.176 "is_configured": true,
00:15:22.176 "data_offset": 2048,
00:15:22.176 "data_size": 63488
00:15:22.176 },
00:15:22.176 {
00:15:22.176 "name": "BaseBdev2",
00:15:22.176 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:22.176 "is_configured": false,
00:15:22.176 "data_offset": 0,
00:15:22.176 "data_size": 0
00:15:22.176 },
00:15:22.176 {
00:15:22.176 "name": "BaseBdev3",
00:15:22.176 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:22.176 "is_configured": false,
00:15:22.176 "data_offset": 0,
00:15:22.176 "data_size": 0
00:15:22.176 },
00:15:22.176 {
00:15:22.176 "name": "BaseBdev4",
00:15:22.176 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:22.176 "is_configured": false,
00:15:22.176 "data_offset": 0,
00:15:22.176 "data_size": 0
00:15:22.176 }
00:15:22.176 ]
00:15:22.176 }'
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:22.176 04:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.743 04:37:10
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.743 [2024-11-27 04:37:10.146059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.743 [2024-11-27 04:37:10.146124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.743 [2024-11-27 04:37:10.154123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.743 [2024-11-27 04:37:10.156619] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.743 [2024-11-27 04:37:10.156795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.743 [2024-11-27 04:37:10.156924] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.743 [2024-11-27 04:37:10.156985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.743 [2024-11-27 04:37:10.157192] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:22.743 [2024-11-27 04:37:10.157250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:22.743 "name": "Existed_Raid", 00:15:22.743 "uuid": "cf832a69-5255-4412-9eea-813ac1fac7af", 00:15:22.743 "strip_size_kb": 64, 00:15:22.743 "state": "configuring", 00:15:22.743 "raid_level": "raid0", 00:15:22.743 "superblock": true, 00:15:22.743 "num_base_bdevs": 4, 00:15:22.743 "num_base_bdevs_discovered": 1, 00:15:22.743 "num_base_bdevs_operational": 4, 00:15:22.743 "base_bdevs_list": [ 00:15:22.743 { 00:15:22.743 "name": "BaseBdev1", 00:15:22.743 "uuid": "f5861720-5587-4a0b-ad5a-8ec8b5a7adee", 00:15:22.743 "is_configured": true, 00:15:22.743 "data_offset": 2048, 00:15:22.743 "data_size": 63488 00:15:22.743 }, 00:15:22.743 { 00:15:22.743 "name": "BaseBdev2", 00:15:22.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.743 "is_configured": false, 00:15:22.743 "data_offset": 0, 00:15:22.743 "data_size": 0 00:15:22.743 }, 00:15:22.743 { 00:15:22.743 "name": "BaseBdev3", 00:15:22.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.743 "is_configured": false, 00:15:22.743 "data_offset": 0, 00:15:22.743 "data_size": 0 00:15:22.743 }, 00:15:22.743 { 00:15:22.743 "name": "BaseBdev4", 00:15:22.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.743 "is_configured": false, 00:15:22.743 "data_offset": 0, 00:15:22.743 "data_size": 0 00:15:22.743 } 00:15:22.743 ] 00:15:22.743 }' 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.743 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.310 [2024-11-27 04:37:10.708120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:15:23.310 BaseBdev2 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.310 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.311 [ 00:15:23.311 { 00:15:23.311 "name": "BaseBdev2", 00:15:23.311 "aliases": [ 00:15:23.311 "d04a431e-bede-4138-a8e8-f0599ea5b801" 00:15:23.311 ], 00:15:23.311 "product_name": "Malloc disk", 00:15:23.311 "block_size": 512, 00:15:23.311 "num_blocks": 65536, 00:15:23.311 "uuid": "d04a431e-bede-4138-a8e8-f0599ea5b801", 
00:15:23.311 "assigned_rate_limits": { 00:15:23.311 "rw_ios_per_sec": 0, 00:15:23.311 "rw_mbytes_per_sec": 0, 00:15:23.311 "r_mbytes_per_sec": 0, 00:15:23.311 "w_mbytes_per_sec": 0 00:15:23.311 }, 00:15:23.311 "claimed": true, 00:15:23.311 "claim_type": "exclusive_write", 00:15:23.311 "zoned": false, 00:15:23.311 "supported_io_types": { 00:15:23.311 "read": true, 00:15:23.311 "write": true, 00:15:23.311 "unmap": true, 00:15:23.311 "flush": true, 00:15:23.311 "reset": true, 00:15:23.311 "nvme_admin": false, 00:15:23.311 "nvme_io": false, 00:15:23.311 "nvme_io_md": false, 00:15:23.311 "write_zeroes": true, 00:15:23.311 "zcopy": true, 00:15:23.311 "get_zone_info": false, 00:15:23.311 "zone_management": false, 00:15:23.311 "zone_append": false, 00:15:23.311 "compare": false, 00:15:23.311 "compare_and_write": false, 00:15:23.311 "abort": true, 00:15:23.311 "seek_hole": false, 00:15:23.311 "seek_data": false, 00:15:23.311 "copy": true, 00:15:23.311 "nvme_iov_md": false 00:15:23.311 }, 00:15:23.311 "memory_domains": [ 00:15:23.311 { 00:15:23.311 "dma_device_id": "system", 00:15:23.311 "dma_device_type": 1 00:15:23.311 }, 00:15:23.311 { 00:15:23.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.311 "dma_device_type": 2 00:15:23.311 } 00:15:23.311 ], 00:15:23.311 "driver_specific": {} 00:15:23.311 } 00:15:23.311 ] 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.311 "name": "Existed_Raid", 00:15:23.311 "uuid": "cf832a69-5255-4412-9eea-813ac1fac7af", 00:15:23.311 "strip_size_kb": 64, 00:15:23.311 "state": "configuring", 00:15:23.311 "raid_level": "raid0", 00:15:23.311 "superblock": true, 00:15:23.311 "num_base_bdevs": 4, 00:15:23.311 "num_base_bdevs_discovered": 2, 00:15:23.311 
"num_base_bdevs_operational": 4, 00:15:23.311 "base_bdevs_list": [ 00:15:23.311 { 00:15:23.311 "name": "BaseBdev1", 00:15:23.311 "uuid": "f5861720-5587-4a0b-ad5a-8ec8b5a7adee", 00:15:23.311 "is_configured": true, 00:15:23.311 "data_offset": 2048, 00:15:23.311 "data_size": 63488 00:15:23.311 }, 00:15:23.311 { 00:15:23.311 "name": "BaseBdev2", 00:15:23.311 "uuid": "d04a431e-bede-4138-a8e8-f0599ea5b801", 00:15:23.311 "is_configured": true, 00:15:23.311 "data_offset": 2048, 00:15:23.311 "data_size": 63488 00:15:23.311 }, 00:15:23.311 { 00:15:23.311 "name": "BaseBdev3", 00:15:23.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.311 "is_configured": false, 00:15:23.311 "data_offset": 0, 00:15:23.311 "data_size": 0 00:15:23.311 }, 00:15:23.311 { 00:15:23.311 "name": "BaseBdev4", 00:15:23.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.311 "is_configured": false, 00:15:23.311 "data_offset": 0, 00:15:23.311 "data_size": 0 00:15:23.311 } 00:15:23.311 ] 00:15:23.311 }' 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.311 04:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.877 [2024-11-27 04:37:11.291011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.877 BaseBdev3 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.877 [ 00:15:23.877 { 00:15:23.877 "name": "BaseBdev3", 00:15:23.877 "aliases": [ 00:15:23.877 "1f25db5a-4019-4031-8539-faace252675e" 00:15:23.877 ], 00:15:23.877 "product_name": "Malloc disk", 00:15:23.877 "block_size": 512, 00:15:23.877 "num_blocks": 65536, 00:15:23.877 "uuid": "1f25db5a-4019-4031-8539-faace252675e", 00:15:23.877 "assigned_rate_limits": { 00:15:23.877 "rw_ios_per_sec": 0, 00:15:23.877 "rw_mbytes_per_sec": 0, 00:15:23.877 "r_mbytes_per_sec": 0, 00:15:23.877 "w_mbytes_per_sec": 0 00:15:23.877 }, 00:15:23.877 "claimed": true, 00:15:23.877 "claim_type": "exclusive_write", 00:15:23.877 "zoned": false, 00:15:23.877 "supported_io_types": { 
00:15:23.877 "read": true, 00:15:23.877 "write": true, 00:15:23.877 "unmap": true, 00:15:23.877 "flush": true, 00:15:23.877 "reset": true, 00:15:23.877 "nvme_admin": false, 00:15:23.877 "nvme_io": false, 00:15:23.877 "nvme_io_md": false, 00:15:23.877 "write_zeroes": true, 00:15:23.877 "zcopy": true, 00:15:23.877 "get_zone_info": false, 00:15:23.877 "zone_management": false, 00:15:23.877 "zone_append": false, 00:15:23.877 "compare": false, 00:15:23.877 "compare_and_write": false, 00:15:23.877 "abort": true, 00:15:23.877 "seek_hole": false, 00:15:23.877 "seek_data": false, 00:15:23.877 "copy": true, 00:15:23.877 "nvme_iov_md": false 00:15:23.877 }, 00:15:23.877 "memory_domains": [ 00:15:23.877 { 00:15:23.877 "dma_device_id": "system", 00:15:23.877 "dma_device_type": 1 00:15:23.877 }, 00:15:23.877 { 00:15:23.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.877 "dma_device_type": 2 00:15:23.877 } 00:15:23.877 ], 00:15:23.877 "driver_specific": {} 00:15:23.877 } 00:15:23.877 ] 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:23.877 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.878 "name": "Existed_Raid", 00:15:23.878 "uuid": "cf832a69-5255-4412-9eea-813ac1fac7af", 00:15:23.878 "strip_size_kb": 64, 00:15:23.878 "state": "configuring", 00:15:23.878 "raid_level": "raid0", 00:15:23.878 "superblock": true, 00:15:23.878 "num_base_bdevs": 4, 00:15:23.878 "num_base_bdevs_discovered": 3, 00:15:23.878 "num_base_bdevs_operational": 4, 00:15:23.878 "base_bdevs_list": [ 00:15:23.878 { 00:15:23.878 "name": "BaseBdev1", 00:15:23.878 "uuid": "f5861720-5587-4a0b-ad5a-8ec8b5a7adee", 00:15:23.878 "is_configured": true, 00:15:23.878 "data_offset": 2048, 00:15:23.878 "data_size": 63488 00:15:23.878 }, 00:15:23.878 { 00:15:23.878 "name": "BaseBdev2", 00:15:23.878 
"uuid": "d04a431e-bede-4138-a8e8-f0599ea5b801", 00:15:23.878 "is_configured": true, 00:15:23.878 "data_offset": 2048, 00:15:23.878 "data_size": 63488 00:15:23.878 }, 00:15:23.878 { 00:15:23.878 "name": "BaseBdev3", 00:15:23.878 "uuid": "1f25db5a-4019-4031-8539-faace252675e", 00:15:23.878 "is_configured": true, 00:15:23.878 "data_offset": 2048, 00:15:23.878 "data_size": 63488 00:15:23.878 }, 00:15:23.878 { 00:15:23.878 "name": "BaseBdev4", 00:15:23.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.878 "is_configured": false, 00:15:23.878 "data_offset": 0, 00:15:23.878 "data_size": 0 00:15:23.878 } 00:15:23.878 ] 00:15:23.878 }' 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.878 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.443 [2024-11-27 04:37:11.845366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:24.443 BaseBdev4 00:15:24.443 [2024-11-27 04:37:11.846019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:24.443 [2024-11-27 04:37:11.846056] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:24.443 [2024-11-27 04:37:11.846450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:24.443 [2024-11-27 04:37:11.846687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:24.443 [2024-11-27 04:37:11.846711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:24.443 [2024-11-27 04:37:11.846955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.443 [ 00:15:24.443 { 00:15:24.443 "name": "BaseBdev4", 00:15:24.443 "aliases": [ 00:15:24.443 "e872b59a-625f-4946-a1a1-b6a560779e83" 00:15:24.443 ], 00:15:24.443 "product_name": "Malloc disk", 00:15:24.443 "block_size": 512, 00:15:24.443 
"num_blocks": 65536, 00:15:24.443 "uuid": "e872b59a-625f-4946-a1a1-b6a560779e83", 00:15:24.443 "assigned_rate_limits": { 00:15:24.443 "rw_ios_per_sec": 0, 00:15:24.443 "rw_mbytes_per_sec": 0, 00:15:24.443 "r_mbytes_per_sec": 0, 00:15:24.443 "w_mbytes_per_sec": 0 00:15:24.443 }, 00:15:24.443 "claimed": true, 00:15:24.443 "claim_type": "exclusive_write", 00:15:24.443 "zoned": false, 00:15:24.443 "supported_io_types": { 00:15:24.443 "read": true, 00:15:24.443 "write": true, 00:15:24.443 "unmap": true, 00:15:24.443 "flush": true, 00:15:24.443 "reset": true, 00:15:24.443 "nvme_admin": false, 00:15:24.443 "nvme_io": false, 00:15:24.443 "nvme_io_md": false, 00:15:24.443 "write_zeroes": true, 00:15:24.443 "zcopy": true, 00:15:24.443 "get_zone_info": false, 00:15:24.443 "zone_management": false, 00:15:24.443 "zone_append": false, 00:15:24.443 "compare": false, 00:15:24.443 "compare_and_write": false, 00:15:24.443 "abort": true, 00:15:24.443 "seek_hole": false, 00:15:24.443 "seek_data": false, 00:15:24.443 "copy": true, 00:15:24.443 "nvme_iov_md": false 00:15:24.443 }, 00:15:24.443 "memory_domains": [ 00:15:24.443 { 00:15:24.443 "dma_device_id": "system", 00:15:24.443 "dma_device_type": 1 00:15:24.443 }, 00:15:24.443 { 00:15:24.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.443 "dma_device_type": 2 00:15:24.443 } 00:15:24.443 ], 00:15:24.443 "driver_specific": {} 00:15:24.443 } 00:15:24.443 ] 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.443 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.443 "name": "Existed_Raid", 00:15:24.444 "uuid": "cf832a69-5255-4412-9eea-813ac1fac7af", 00:15:24.444 "strip_size_kb": 64, 00:15:24.444 "state": "online", 00:15:24.444 "raid_level": "raid0", 00:15:24.444 "superblock": true, 00:15:24.444 "num_base_bdevs": 4, 
00:15:24.444 "num_base_bdevs_discovered": 4, 00:15:24.444 "num_base_bdevs_operational": 4, 00:15:24.444 "base_bdevs_list": [ 00:15:24.444 { 00:15:24.444 "name": "BaseBdev1", 00:15:24.444 "uuid": "f5861720-5587-4a0b-ad5a-8ec8b5a7adee", 00:15:24.444 "is_configured": true, 00:15:24.444 "data_offset": 2048, 00:15:24.444 "data_size": 63488 00:15:24.444 }, 00:15:24.444 { 00:15:24.444 "name": "BaseBdev2", 00:15:24.444 "uuid": "d04a431e-bede-4138-a8e8-f0599ea5b801", 00:15:24.444 "is_configured": true, 00:15:24.444 "data_offset": 2048, 00:15:24.444 "data_size": 63488 00:15:24.444 }, 00:15:24.444 { 00:15:24.444 "name": "BaseBdev3", 00:15:24.444 "uuid": "1f25db5a-4019-4031-8539-faace252675e", 00:15:24.444 "is_configured": true, 00:15:24.444 "data_offset": 2048, 00:15:24.444 "data_size": 63488 00:15:24.444 }, 00:15:24.444 { 00:15:24.444 "name": "BaseBdev4", 00:15:24.444 "uuid": "e872b59a-625f-4946-a1a1-b6a560779e83", 00:15:24.444 "is_configured": true, 00:15:24.444 "data_offset": 2048, 00:15:24.444 "data_size": 63488 00:15:24.444 } 00:15:24.444 ] 00:15:24.444 }' 00:15:24.444 04:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.444 04:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:25.011 
04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.011 [2024-11-27 04:37:12.370021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.011 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:25.011 "name": "Existed_Raid", 00:15:25.011 "aliases": [ 00:15:25.011 "cf832a69-5255-4412-9eea-813ac1fac7af" 00:15:25.011 ], 00:15:25.011 "product_name": "Raid Volume", 00:15:25.011 "block_size": 512, 00:15:25.011 "num_blocks": 253952, 00:15:25.011 "uuid": "cf832a69-5255-4412-9eea-813ac1fac7af", 00:15:25.011 "assigned_rate_limits": { 00:15:25.011 "rw_ios_per_sec": 0, 00:15:25.012 "rw_mbytes_per_sec": 0, 00:15:25.012 "r_mbytes_per_sec": 0, 00:15:25.012 "w_mbytes_per_sec": 0 00:15:25.012 }, 00:15:25.012 "claimed": false, 00:15:25.012 "zoned": false, 00:15:25.012 "supported_io_types": { 00:15:25.012 "read": true, 00:15:25.012 "write": true, 00:15:25.012 "unmap": true, 00:15:25.012 "flush": true, 00:15:25.012 "reset": true, 00:15:25.012 "nvme_admin": false, 00:15:25.012 "nvme_io": false, 00:15:25.012 "nvme_io_md": false, 00:15:25.012 "write_zeroes": true, 00:15:25.012 "zcopy": false, 00:15:25.012 "get_zone_info": false, 00:15:25.012 "zone_management": false, 00:15:25.012 "zone_append": false, 00:15:25.012 "compare": false, 00:15:25.012 "compare_and_write": false, 00:15:25.012 "abort": false, 00:15:25.012 "seek_hole": false, 00:15:25.012 "seek_data": false, 00:15:25.012 "copy": false, 00:15:25.012 
"nvme_iov_md": false 00:15:25.012 }, 00:15:25.012 "memory_domains": [ 00:15:25.012 { 00:15:25.012 "dma_device_id": "system", 00:15:25.012 "dma_device_type": 1 00:15:25.012 }, 00:15:25.012 { 00:15:25.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.012 "dma_device_type": 2 00:15:25.012 }, 00:15:25.012 { 00:15:25.012 "dma_device_id": "system", 00:15:25.012 "dma_device_type": 1 00:15:25.012 }, 00:15:25.012 { 00:15:25.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.012 "dma_device_type": 2 00:15:25.012 }, 00:15:25.012 { 00:15:25.012 "dma_device_id": "system", 00:15:25.012 "dma_device_type": 1 00:15:25.012 }, 00:15:25.012 { 00:15:25.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.012 "dma_device_type": 2 00:15:25.012 }, 00:15:25.012 { 00:15:25.012 "dma_device_id": "system", 00:15:25.012 "dma_device_type": 1 00:15:25.012 }, 00:15:25.012 { 00:15:25.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.012 "dma_device_type": 2 00:15:25.012 } 00:15:25.012 ], 00:15:25.012 "driver_specific": { 00:15:25.012 "raid": { 00:15:25.012 "uuid": "cf832a69-5255-4412-9eea-813ac1fac7af", 00:15:25.012 "strip_size_kb": 64, 00:15:25.012 "state": "online", 00:15:25.012 "raid_level": "raid0", 00:15:25.012 "superblock": true, 00:15:25.012 "num_base_bdevs": 4, 00:15:25.012 "num_base_bdevs_discovered": 4, 00:15:25.012 "num_base_bdevs_operational": 4, 00:15:25.012 "base_bdevs_list": [ 00:15:25.012 { 00:15:25.012 "name": "BaseBdev1", 00:15:25.012 "uuid": "f5861720-5587-4a0b-ad5a-8ec8b5a7adee", 00:15:25.012 "is_configured": true, 00:15:25.012 "data_offset": 2048, 00:15:25.012 "data_size": 63488 00:15:25.012 }, 00:15:25.012 { 00:15:25.012 "name": "BaseBdev2", 00:15:25.012 "uuid": "d04a431e-bede-4138-a8e8-f0599ea5b801", 00:15:25.012 "is_configured": true, 00:15:25.012 "data_offset": 2048, 00:15:25.012 "data_size": 63488 00:15:25.012 }, 00:15:25.012 { 00:15:25.012 "name": "BaseBdev3", 00:15:25.012 "uuid": "1f25db5a-4019-4031-8539-faace252675e", 00:15:25.012 "is_configured": true, 
00:15:25.012 "data_offset": 2048, 00:15:25.012 "data_size": 63488 00:15:25.012 }, 00:15:25.012 { 00:15:25.012 "name": "BaseBdev4", 00:15:25.012 "uuid": "e872b59a-625f-4946-a1a1-b6a560779e83", 00:15:25.012 "is_configured": true, 00:15:25.012 "data_offset": 2048, 00:15:25.012 "data_size": 63488 00:15:25.012 } 00:15:25.012 ] 00:15:25.012 } 00:15:25.012 } 00:15:25.012 }' 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:25.012 BaseBdev2 00:15:25.012 BaseBdev3 00:15:25.012 BaseBdev4' 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.012 04:37:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.012 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.272 [2024-11-27 04:37:12.734641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.272 [2024-11-27 04:37:12.734817] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.272 [2024-11-27 04:37:12.734999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.272 "name": "Existed_Raid", 00:15:25.272 "uuid": "cf832a69-5255-4412-9eea-813ac1fac7af", 00:15:25.272 "strip_size_kb": 64, 00:15:25.272 "state": "offline", 00:15:25.272 "raid_level": "raid0", 00:15:25.272 "superblock": true, 00:15:25.272 "num_base_bdevs": 4, 00:15:25.272 "num_base_bdevs_discovered": 3, 00:15:25.272 "num_base_bdevs_operational": 3, 00:15:25.272 "base_bdevs_list": [ 00:15:25.272 { 00:15:25.272 "name": null, 00:15:25.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.272 "is_configured": false, 00:15:25.272 "data_offset": 0, 00:15:25.272 "data_size": 63488 00:15:25.272 }, 00:15:25.272 { 00:15:25.272 "name": "BaseBdev2", 00:15:25.272 "uuid": "d04a431e-bede-4138-a8e8-f0599ea5b801", 00:15:25.272 "is_configured": true, 00:15:25.272 "data_offset": 2048, 00:15:25.272 "data_size": 63488 00:15:25.272 }, 00:15:25.272 { 00:15:25.272 "name": "BaseBdev3", 00:15:25.272 "uuid": "1f25db5a-4019-4031-8539-faace252675e", 00:15:25.272 "is_configured": true, 00:15:25.272 "data_offset": 2048, 00:15:25.272 "data_size": 63488 00:15:25.272 }, 00:15:25.272 { 00:15:25.272 "name": "BaseBdev4", 00:15:25.272 "uuid": "e872b59a-625f-4946-a1a1-b6a560779e83", 00:15:25.272 "is_configured": true, 00:15:25.272 "data_offset": 2048, 00:15:25.272 "data_size": 63488 00:15:25.272 } 00:15:25.272 ] 00:15:25.272 }' 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.272 04:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.838 
04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.838 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.838 [2024-11-27 04:37:13.377979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.097 [2024-11-27 04:37:13.523167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:26.097 04:37:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.097 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.097 [2024-11-27 04:37:13.690366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:26.097 [2024-11-27 04:37:13.690550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.357 BaseBdev2 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.357 [ 00:15:26.357 { 00:15:26.357 "name": "BaseBdev2", 00:15:26.357 "aliases": [ 00:15:26.357 
"7e821e58-a73f-43b5-a62f-b42bae54e054" 00:15:26.357 ], 00:15:26.357 "product_name": "Malloc disk", 00:15:26.357 "block_size": 512, 00:15:26.357 "num_blocks": 65536, 00:15:26.357 "uuid": "7e821e58-a73f-43b5-a62f-b42bae54e054", 00:15:26.357 "assigned_rate_limits": { 00:15:26.357 "rw_ios_per_sec": 0, 00:15:26.357 "rw_mbytes_per_sec": 0, 00:15:26.357 "r_mbytes_per_sec": 0, 00:15:26.357 "w_mbytes_per_sec": 0 00:15:26.357 }, 00:15:26.357 "claimed": false, 00:15:26.357 "zoned": false, 00:15:26.357 "supported_io_types": { 00:15:26.357 "read": true, 00:15:26.357 "write": true, 00:15:26.357 "unmap": true, 00:15:26.357 "flush": true, 00:15:26.357 "reset": true, 00:15:26.357 "nvme_admin": false, 00:15:26.357 "nvme_io": false, 00:15:26.357 "nvme_io_md": false, 00:15:26.357 "write_zeroes": true, 00:15:26.357 "zcopy": true, 00:15:26.357 "get_zone_info": false, 00:15:26.357 "zone_management": false, 00:15:26.357 "zone_append": false, 00:15:26.357 "compare": false, 00:15:26.357 "compare_and_write": false, 00:15:26.357 "abort": true, 00:15:26.357 "seek_hole": false, 00:15:26.357 "seek_data": false, 00:15:26.357 "copy": true, 00:15:26.357 "nvme_iov_md": false 00:15:26.357 }, 00:15:26.357 "memory_domains": [ 00:15:26.357 { 00:15:26.357 "dma_device_id": "system", 00:15:26.357 "dma_device_type": 1 00:15:26.357 }, 00:15:26.357 { 00:15:26.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.357 "dma_device_type": 2 00:15:26.357 } 00:15:26.357 ], 00:15:26.357 "driver_specific": {} 00:15:26.357 } 00:15:26.357 ] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:26.357 04:37:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.357 BaseBdev3 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.357 [ 00:15:26.357 { 
00:15:26.357 "name": "BaseBdev3", 00:15:26.357 "aliases": [ 00:15:26.357 "389f4262-012c-42fd-989d-80ed010f469b" 00:15:26.357 ], 00:15:26.357 "product_name": "Malloc disk", 00:15:26.357 "block_size": 512, 00:15:26.357 "num_blocks": 65536, 00:15:26.357 "uuid": "389f4262-012c-42fd-989d-80ed010f469b", 00:15:26.357 "assigned_rate_limits": { 00:15:26.357 "rw_ios_per_sec": 0, 00:15:26.357 "rw_mbytes_per_sec": 0, 00:15:26.357 "r_mbytes_per_sec": 0, 00:15:26.357 "w_mbytes_per_sec": 0 00:15:26.357 }, 00:15:26.357 "claimed": false, 00:15:26.357 "zoned": false, 00:15:26.357 "supported_io_types": { 00:15:26.357 "read": true, 00:15:26.357 "write": true, 00:15:26.357 "unmap": true, 00:15:26.357 "flush": true, 00:15:26.357 "reset": true, 00:15:26.357 "nvme_admin": false, 00:15:26.357 "nvme_io": false, 00:15:26.357 "nvme_io_md": false, 00:15:26.357 "write_zeroes": true, 00:15:26.357 "zcopy": true, 00:15:26.357 "get_zone_info": false, 00:15:26.357 "zone_management": false, 00:15:26.357 "zone_append": false, 00:15:26.357 "compare": false, 00:15:26.357 "compare_and_write": false, 00:15:26.357 "abort": true, 00:15:26.357 "seek_hole": false, 00:15:26.357 "seek_data": false, 00:15:26.357 "copy": true, 00:15:26.357 "nvme_iov_md": false 00:15:26.357 }, 00:15:26.357 "memory_domains": [ 00:15:26.357 { 00:15:26.357 "dma_device_id": "system", 00:15:26.357 "dma_device_type": 1 00:15:26.357 }, 00:15:26.357 { 00:15:26.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.357 "dma_device_type": 2 00:15:26.357 } 00:15:26.357 ], 00:15:26.357 "driver_specific": {} 00:15:26.357 } 00:15:26.357 ] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.357 04:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.617 BaseBdev4 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:26.617 [ 00:15:26.617 { 00:15:26.617 "name": "BaseBdev4", 00:15:26.617 "aliases": [ 00:15:26.617 "1a8a56f1-6b7c-410d-b02f-94beb2aaa632" 00:15:26.617 ], 00:15:26.617 "product_name": "Malloc disk", 00:15:26.617 "block_size": 512, 00:15:26.617 "num_blocks": 65536, 00:15:26.617 "uuid": "1a8a56f1-6b7c-410d-b02f-94beb2aaa632", 00:15:26.617 "assigned_rate_limits": { 00:15:26.617 "rw_ios_per_sec": 0, 00:15:26.617 "rw_mbytes_per_sec": 0, 00:15:26.617 "r_mbytes_per_sec": 0, 00:15:26.617 "w_mbytes_per_sec": 0 00:15:26.617 }, 00:15:26.617 "claimed": false, 00:15:26.617 "zoned": false, 00:15:26.617 "supported_io_types": { 00:15:26.617 "read": true, 00:15:26.617 "write": true, 00:15:26.617 "unmap": true, 00:15:26.617 "flush": true, 00:15:26.617 "reset": true, 00:15:26.617 "nvme_admin": false, 00:15:26.617 "nvme_io": false, 00:15:26.617 "nvme_io_md": false, 00:15:26.617 "write_zeroes": true, 00:15:26.617 "zcopy": true, 00:15:26.617 "get_zone_info": false, 00:15:26.617 "zone_management": false, 00:15:26.617 "zone_append": false, 00:15:26.617 "compare": false, 00:15:26.617 "compare_and_write": false, 00:15:26.617 "abort": true, 00:15:26.617 "seek_hole": false, 00:15:26.617 "seek_data": false, 00:15:26.617 "copy": true, 00:15:26.617 "nvme_iov_md": false 00:15:26.617 }, 00:15:26.617 "memory_domains": [ 00:15:26.617 { 00:15:26.617 "dma_device_id": "system", 00:15:26.617 "dma_device_type": 1 00:15:26.617 }, 00:15:26.617 { 00:15:26.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.617 "dma_device_type": 2 00:15:26.617 } 00:15:26.617 ], 00:15:26.617 "driver_specific": {} 00:15:26.617 } 00:15:26.617 ] 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:26.617 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:26.617 04:37:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.618 [2024-11-27 04:37:14.050748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.618 [2024-11-27 04:37:14.050936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.618 [2024-11-27 04:37:14.051067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.618 [2024-11-27 04:37:14.053679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.618 [2024-11-27 04:37:14.053893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.618 "name": "Existed_Raid", 00:15:26.618 "uuid": "b427823f-4e34-410c-9f9f-6cab99cba70d", 00:15:26.618 "strip_size_kb": 64, 00:15:26.618 "state": "configuring", 00:15:26.618 "raid_level": "raid0", 00:15:26.618 "superblock": true, 00:15:26.618 "num_base_bdevs": 4, 00:15:26.618 "num_base_bdevs_discovered": 3, 00:15:26.618 "num_base_bdevs_operational": 4, 00:15:26.618 "base_bdevs_list": [ 00:15:26.618 { 00:15:26.618 "name": "BaseBdev1", 00:15:26.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.618 "is_configured": false, 00:15:26.618 "data_offset": 0, 00:15:26.618 "data_size": 0 00:15:26.618 }, 00:15:26.618 { 00:15:26.618 "name": "BaseBdev2", 00:15:26.618 "uuid": "7e821e58-a73f-43b5-a62f-b42bae54e054", 00:15:26.618 "is_configured": true, 00:15:26.618 "data_offset": 2048, 00:15:26.618 "data_size": 63488 
00:15:26.618 }, 00:15:26.618 { 00:15:26.618 "name": "BaseBdev3", 00:15:26.618 "uuid": "389f4262-012c-42fd-989d-80ed010f469b", 00:15:26.618 "is_configured": true, 00:15:26.618 "data_offset": 2048, 00:15:26.618 "data_size": 63488 00:15:26.618 }, 00:15:26.618 { 00:15:26.618 "name": "BaseBdev4", 00:15:26.618 "uuid": "1a8a56f1-6b7c-410d-b02f-94beb2aaa632", 00:15:26.618 "is_configured": true, 00:15:26.618 "data_offset": 2048, 00:15:26.618 "data_size": 63488 00:15:26.618 } 00:15:26.618 ] 00:15:26.618 }' 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.618 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.235 [2024-11-27 04:37:14.598966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.235 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.236 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.236 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.236 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.236 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.236 "name": "Existed_Raid", 00:15:27.236 "uuid": "b427823f-4e34-410c-9f9f-6cab99cba70d", 00:15:27.236 "strip_size_kb": 64, 00:15:27.236 "state": "configuring", 00:15:27.236 "raid_level": "raid0", 00:15:27.236 "superblock": true, 00:15:27.236 "num_base_bdevs": 4, 00:15:27.236 "num_base_bdevs_discovered": 2, 00:15:27.236 "num_base_bdevs_operational": 4, 00:15:27.236 "base_bdevs_list": [ 00:15:27.236 { 00:15:27.236 "name": "BaseBdev1", 00:15:27.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.236 "is_configured": false, 00:15:27.236 "data_offset": 0, 00:15:27.236 "data_size": 0 00:15:27.236 }, 00:15:27.236 { 00:15:27.236 "name": null, 00:15:27.236 "uuid": "7e821e58-a73f-43b5-a62f-b42bae54e054", 00:15:27.236 "is_configured": false, 00:15:27.236 "data_offset": 0, 00:15:27.236 "data_size": 63488 
00:15:27.236 }, 00:15:27.236 { 00:15:27.236 "name": "BaseBdev3", 00:15:27.236 "uuid": "389f4262-012c-42fd-989d-80ed010f469b", 00:15:27.236 "is_configured": true, 00:15:27.236 "data_offset": 2048, 00:15:27.236 "data_size": 63488 00:15:27.236 }, 00:15:27.236 { 00:15:27.236 "name": "BaseBdev4", 00:15:27.236 "uuid": "1a8a56f1-6b7c-410d-b02f-94beb2aaa632", 00:15:27.236 "is_configured": true, 00:15:27.236 "data_offset": 2048, 00:15:27.236 "data_size": 63488 00:15:27.236 } 00:15:27.236 ] 00:15:27.236 }' 00:15:27.236 04:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.236 04:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.510 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:27.510 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.510 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.510 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.510 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.768 [2024-11-27 04:37:15.193166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.768 BaseBdev1 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.768 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.768 [ 00:15:27.768 { 00:15:27.768 "name": "BaseBdev1", 00:15:27.768 "aliases": [ 00:15:27.768 "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5" 00:15:27.768 ], 00:15:27.768 "product_name": "Malloc disk", 00:15:27.768 "block_size": 512, 00:15:27.768 "num_blocks": 65536, 00:15:27.768 "uuid": "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5", 00:15:27.768 "assigned_rate_limits": { 00:15:27.768 "rw_ios_per_sec": 0, 00:15:27.768 "rw_mbytes_per_sec": 0, 
00:15:27.768 "r_mbytes_per_sec": 0, 00:15:27.768 "w_mbytes_per_sec": 0 00:15:27.768 }, 00:15:27.768 "claimed": true, 00:15:27.768 "claim_type": "exclusive_write", 00:15:27.768 "zoned": false, 00:15:27.768 "supported_io_types": { 00:15:27.768 "read": true, 00:15:27.768 "write": true, 00:15:27.768 "unmap": true, 00:15:27.768 "flush": true, 00:15:27.768 "reset": true, 00:15:27.768 "nvme_admin": false, 00:15:27.768 "nvme_io": false, 00:15:27.768 "nvme_io_md": false, 00:15:27.768 "write_zeroes": true, 00:15:27.768 "zcopy": true, 00:15:27.768 "get_zone_info": false, 00:15:27.768 "zone_management": false, 00:15:27.768 "zone_append": false, 00:15:27.768 "compare": false, 00:15:27.768 "compare_and_write": false, 00:15:27.768 "abort": true, 00:15:27.768 "seek_hole": false, 00:15:27.768 "seek_data": false, 00:15:27.768 "copy": true, 00:15:27.768 "nvme_iov_md": false 00:15:27.768 }, 00:15:27.768 "memory_domains": [ 00:15:27.768 { 00:15:27.768 "dma_device_id": "system", 00:15:27.768 "dma_device_type": 1 00:15:27.768 }, 00:15:27.768 { 00:15:27.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.768 "dma_device_type": 2 00:15:27.768 } 00:15:27.768 ], 00:15:27.768 "driver_specific": {} 00:15:27.768 } 00:15:27.769 ] 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:27.769 04:37:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.769 "name": "Existed_Raid", 00:15:27.769 "uuid": "b427823f-4e34-410c-9f9f-6cab99cba70d", 00:15:27.769 "strip_size_kb": 64, 00:15:27.769 "state": "configuring", 00:15:27.769 "raid_level": "raid0", 00:15:27.769 "superblock": true, 00:15:27.769 "num_base_bdevs": 4, 00:15:27.769 "num_base_bdevs_discovered": 3, 00:15:27.769 "num_base_bdevs_operational": 4, 00:15:27.769 "base_bdevs_list": [ 00:15:27.769 { 00:15:27.769 "name": "BaseBdev1", 00:15:27.769 "uuid": "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5", 00:15:27.769 "is_configured": true, 00:15:27.769 "data_offset": 2048, 00:15:27.769 "data_size": 63488 00:15:27.769 }, 00:15:27.769 { 
00:15:27.769 "name": null, 00:15:27.769 "uuid": "7e821e58-a73f-43b5-a62f-b42bae54e054", 00:15:27.769 "is_configured": false, 00:15:27.769 "data_offset": 0, 00:15:27.769 "data_size": 63488 00:15:27.769 }, 00:15:27.769 { 00:15:27.769 "name": "BaseBdev3", 00:15:27.769 "uuid": "389f4262-012c-42fd-989d-80ed010f469b", 00:15:27.769 "is_configured": true, 00:15:27.769 "data_offset": 2048, 00:15:27.769 "data_size": 63488 00:15:27.769 }, 00:15:27.769 { 00:15:27.769 "name": "BaseBdev4", 00:15:27.769 "uuid": "1a8a56f1-6b7c-410d-b02f-94beb2aaa632", 00:15:27.769 "is_configured": true, 00:15:27.769 "data_offset": 2048, 00:15:27.769 "data_size": 63488 00:15:27.769 } 00:15:27.769 ] 00:15:27.769 }' 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.769 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.336 [2024-11-27 04:37:15.729440] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.336 04:37:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.336 "name": "Existed_Raid", 00:15:28.336 "uuid": "b427823f-4e34-410c-9f9f-6cab99cba70d", 00:15:28.336 "strip_size_kb": 64, 00:15:28.336 "state": "configuring", 00:15:28.336 "raid_level": "raid0", 00:15:28.336 "superblock": true, 00:15:28.336 "num_base_bdevs": 4, 00:15:28.336 "num_base_bdevs_discovered": 2, 00:15:28.336 "num_base_bdevs_operational": 4, 00:15:28.336 "base_bdevs_list": [ 00:15:28.336 { 00:15:28.336 "name": "BaseBdev1", 00:15:28.336 "uuid": "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5", 00:15:28.336 "is_configured": true, 00:15:28.336 "data_offset": 2048, 00:15:28.336 "data_size": 63488 00:15:28.336 }, 00:15:28.336 { 00:15:28.336 "name": null, 00:15:28.336 "uuid": "7e821e58-a73f-43b5-a62f-b42bae54e054", 00:15:28.336 "is_configured": false, 00:15:28.336 "data_offset": 0, 00:15:28.336 "data_size": 63488 00:15:28.336 }, 00:15:28.336 { 00:15:28.336 "name": null, 00:15:28.336 "uuid": "389f4262-012c-42fd-989d-80ed010f469b", 00:15:28.336 "is_configured": false, 00:15:28.336 "data_offset": 0, 00:15:28.336 "data_size": 63488 00:15:28.336 }, 00:15:28.336 { 00:15:28.336 "name": "BaseBdev4", 00:15:28.336 "uuid": "1a8a56f1-6b7c-410d-b02f-94beb2aaa632", 00:15:28.336 "is_configured": true, 00:15:28.336 "data_offset": 2048, 00:15:28.336 "data_size": 63488 00:15:28.336 } 00:15:28.336 ] 00:15:28.336 }' 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.336 04:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.902 
04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.902 [2024-11-27 04:37:16.369606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.902 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.902 "name": "Existed_Raid", 00:15:28.902 "uuid": "b427823f-4e34-410c-9f9f-6cab99cba70d", 00:15:28.902 "strip_size_kb": 64, 00:15:28.902 "state": "configuring", 00:15:28.902 "raid_level": "raid0", 00:15:28.902 "superblock": true, 00:15:28.902 "num_base_bdevs": 4, 00:15:28.902 "num_base_bdevs_discovered": 3, 00:15:28.902 "num_base_bdevs_operational": 4, 00:15:28.902 "base_bdevs_list": [ 00:15:28.902 { 00:15:28.902 "name": "BaseBdev1", 00:15:28.902 "uuid": "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5", 00:15:28.902 "is_configured": true, 00:15:28.902 "data_offset": 2048, 00:15:28.902 "data_size": 63488 00:15:28.902 }, 00:15:28.902 { 00:15:28.902 "name": null, 00:15:28.902 "uuid": "7e821e58-a73f-43b5-a62f-b42bae54e054", 00:15:28.902 "is_configured": false, 00:15:28.902 "data_offset": 0, 00:15:28.902 "data_size": 63488 00:15:28.902 }, 00:15:28.902 { 00:15:28.902 "name": "BaseBdev3", 00:15:28.902 "uuid": "389f4262-012c-42fd-989d-80ed010f469b", 00:15:28.902 "is_configured": true, 00:15:28.902 "data_offset": 2048, 00:15:28.902 "data_size": 63488 00:15:28.902 }, 00:15:28.902 { 00:15:28.902 "name": "BaseBdev4", 00:15:28.902 "uuid": 
"1a8a56f1-6b7c-410d-b02f-94beb2aaa632", 00:15:28.902 "is_configured": true, 00:15:28.902 "data_offset": 2048, 00:15:28.902 "data_size": 63488 00:15:28.903 } 00:15:28.903 ] 00:15:28.903 }' 00:15:28.903 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.903 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.469 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.469 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:29.469 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.469 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.469 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.469 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:29.469 04:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:29.469 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.469 04:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.469 [2024-11-27 04:37:16.941834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.469 "name": "Existed_Raid", 00:15:29.469 "uuid": "b427823f-4e34-410c-9f9f-6cab99cba70d", 00:15:29.469 "strip_size_kb": 64, 00:15:29.469 "state": "configuring", 00:15:29.469 "raid_level": "raid0", 00:15:29.469 "superblock": true, 00:15:29.469 "num_base_bdevs": 4, 00:15:29.469 "num_base_bdevs_discovered": 2, 00:15:29.469 "num_base_bdevs_operational": 4, 00:15:29.469 "base_bdevs_list": [ 00:15:29.469 { 00:15:29.469 "name": null, 00:15:29.469 
"uuid": "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5", 00:15:29.469 "is_configured": false, 00:15:29.469 "data_offset": 0, 00:15:29.469 "data_size": 63488 00:15:29.469 }, 00:15:29.469 { 00:15:29.469 "name": null, 00:15:29.469 "uuid": "7e821e58-a73f-43b5-a62f-b42bae54e054", 00:15:29.469 "is_configured": false, 00:15:29.469 "data_offset": 0, 00:15:29.469 "data_size": 63488 00:15:29.469 }, 00:15:29.469 { 00:15:29.469 "name": "BaseBdev3", 00:15:29.469 "uuid": "389f4262-012c-42fd-989d-80ed010f469b", 00:15:29.469 "is_configured": true, 00:15:29.469 "data_offset": 2048, 00:15:29.469 "data_size": 63488 00:15:29.469 }, 00:15:29.469 { 00:15:29.469 "name": "BaseBdev4", 00:15:29.469 "uuid": "1a8a56f1-6b7c-410d-b02f-94beb2aaa632", 00:15:29.469 "is_configured": true, 00:15:29.469 "data_offset": 2048, 00:15:29.469 "data_size": 63488 00:15:29.469 } 00:15:29.469 ] 00:15:29.469 }' 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.469 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.035 [2024-11-27 04:37:17.601401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.035 04:37:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.035 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.294 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.294 "name": "Existed_Raid", 00:15:30.294 "uuid": "b427823f-4e34-410c-9f9f-6cab99cba70d", 00:15:30.294 "strip_size_kb": 64, 00:15:30.294 "state": "configuring", 00:15:30.294 "raid_level": "raid0", 00:15:30.294 "superblock": true, 00:15:30.294 "num_base_bdevs": 4, 00:15:30.294 "num_base_bdevs_discovered": 3, 00:15:30.294 "num_base_bdevs_operational": 4, 00:15:30.294 "base_bdevs_list": [ 00:15:30.294 { 00:15:30.294 "name": null, 00:15:30.294 "uuid": "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5", 00:15:30.294 "is_configured": false, 00:15:30.294 "data_offset": 0, 00:15:30.294 "data_size": 63488 00:15:30.294 }, 00:15:30.294 { 00:15:30.294 "name": "BaseBdev2", 00:15:30.294 "uuid": "7e821e58-a73f-43b5-a62f-b42bae54e054", 00:15:30.294 "is_configured": true, 00:15:30.294 "data_offset": 2048, 00:15:30.294 "data_size": 63488 00:15:30.294 }, 00:15:30.294 { 00:15:30.294 "name": "BaseBdev3", 00:15:30.294 "uuid": "389f4262-012c-42fd-989d-80ed010f469b", 00:15:30.294 "is_configured": true, 00:15:30.294 "data_offset": 2048, 00:15:30.294 "data_size": 63488 00:15:30.294 }, 00:15:30.294 { 00:15:30.294 "name": "BaseBdev4", 00:15:30.294 "uuid": "1a8a56f1-6b7c-410d-b02f-94beb2aaa632", 00:15:30.294 "is_configured": true, 00:15:30.294 "data_offset": 2048, 00:15:30.294 "data_size": 63488 00:15:30.294 } 00:15:30.294 ] 00:15:30.294 }' 00:15:30.294 04:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.294 04:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.552 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.552 04:37:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:30.552 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.552 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.552 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.552 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:30.552 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.552 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.552 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:30.552 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.552 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.811 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f9cd4e7-186b-4ec9-a246-c1929ac56cd5 00:15:30.811 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.811 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.811 [2024-11-27 04:37:18.231968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:30.811 [2024-11-27 04:37:18.232474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:30.811 [2024-11-27 04:37:18.232498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:30.811 [2024-11-27 04:37:18.232845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:15:30.811 NewBaseBdev 00:15:30.811 [2024-11-27 04:37:18.233023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:30.811 [2024-11-27 04:37:18.233045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:30.811 [2024-11-27 04:37:18.233201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.811 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.811 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:30.811 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:30.811 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.811 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:30.811 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.811 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.812 04:37:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.812 [ 00:15:30.812 { 00:15:30.812 "name": "NewBaseBdev", 00:15:30.812 "aliases": [ 00:15:30.812 "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5" 00:15:30.812 ], 00:15:30.812 "product_name": "Malloc disk", 00:15:30.812 "block_size": 512, 00:15:30.812 "num_blocks": 65536, 00:15:30.812 "uuid": "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5", 00:15:30.812 "assigned_rate_limits": { 00:15:30.812 "rw_ios_per_sec": 0, 00:15:30.812 "rw_mbytes_per_sec": 0, 00:15:30.812 "r_mbytes_per_sec": 0, 00:15:30.812 "w_mbytes_per_sec": 0 00:15:30.812 }, 00:15:30.812 "claimed": true, 00:15:30.812 "claim_type": "exclusive_write", 00:15:30.812 "zoned": false, 00:15:30.812 "supported_io_types": { 00:15:30.812 "read": true, 00:15:30.812 "write": true, 00:15:30.812 "unmap": true, 00:15:30.812 "flush": true, 00:15:30.812 "reset": true, 00:15:30.812 "nvme_admin": false, 00:15:30.812 "nvme_io": false, 00:15:30.812 "nvme_io_md": false, 00:15:30.812 "write_zeroes": true, 00:15:30.812 "zcopy": true, 00:15:30.812 "get_zone_info": false, 00:15:30.812 "zone_management": false, 00:15:30.812 "zone_append": false, 00:15:30.812 "compare": false, 00:15:30.812 "compare_and_write": false, 00:15:30.812 "abort": true, 00:15:30.812 "seek_hole": false, 00:15:30.812 "seek_data": false, 00:15:30.812 "copy": true, 00:15:30.812 "nvme_iov_md": false 00:15:30.812 }, 00:15:30.812 "memory_domains": [ 00:15:30.812 { 00:15:30.812 "dma_device_id": "system", 00:15:30.812 "dma_device_type": 1 00:15:30.812 }, 00:15:30.812 { 00:15:30.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.812 "dma_device_type": 2 00:15:30.812 } 00:15:30.812 ], 00:15:30.812 "driver_specific": {} 00:15:30.812 } 00:15:30.812 ] 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:30.812 04:37:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.812 "name": "Existed_Raid", 00:15:30.812 "uuid": "b427823f-4e34-410c-9f9f-6cab99cba70d", 00:15:30.812 "strip_size_kb": 64, 00:15:30.812 
"state": "online", 00:15:30.812 "raid_level": "raid0", 00:15:30.812 "superblock": true, 00:15:30.812 "num_base_bdevs": 4, 00:15:30.812 "num_base_bdevs_discovered": 4, 00:15:30.812 "num_base_bdevs_operational": 4, 00:15:30.812 "base_bdevs_list": [ 00:15:30.812 { 00:15:30.812 "name": "NewBaseBdev", 00:15:30.812 "uuid": "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5", 00:15:30.812 "is_configured": true, 00:15:30.812 "data_offset": 2048, 00:15:30.812 "data_size": 63488 00:15:30.812 }, 00:15:30.812 { 00:15:30.812 "name": "BaseBdev2", 00:15:30.812 "uuid": "7e821e58-a73f-43b5-a62f-b42bae54e054", 00:15:30.812 "is_configured": true, 00:15:30.812 "data_offset": 2048, 00:15:30.812 "data_size": 63488 00:15:30.812 }, 00:15:30.812 { 00:15:30.812 "name": "BaseBdev3", 00:15:30.812 "uuid": "389f4262-012c-42fd-989d-80ed010f469b", 00:15:30.812 "is_configured": true, 00:15:30.812 "data_offset": 2048, 00:15:30.812 "data_size": 63488 00:15:30.812 }, 00:15:30.812 { 00:15:30.812 "name": "BaseBdev4", 00:15:30.812 "uuid": "1a8a56f1-6b7c-410d-b02f-94beb2aaa632", 00:15:30.812 "is_configured": true, 00:15:30.812 "data_offset": 2048, 00:15:30.812 "data_size": 63488 00:15:30.812 } 00:15:30.812 ] 00:15:30.812 }' 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.812 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.379 
04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.379 [2024-11-27 04:37:18.772656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.379 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.379 "name": "Existed_Raid", 00:15:31.379 "aliases": [ 00:15:31.379 "b427823f-4e34-410c-9f9f-6cab99cba70d" 00:15:31.379 ], 00:15:31.379 "product_name": "Raid Volume", 00:15:31.379 "block_size": 512, 00:15:31.379 "num_blocks": 253952, 00:15:31.379 "uuid": "b427823f-4e34-410c-9f9f-6cab99cba70d", 00:15:31.379 "assigned_rate_limits": { 00:15:31.379 "rw_ios_per_sec": 0, 00:15:31.379 "rw_mbytes_per_sec": 0, 00:15:31.379 "r_mbytes_per_sec": 0, 00:15:31.379 "w_mbytes_per_sec": 0 00:15:31.379 }, 00:15:31.379 "claimed": false, 00:15:31.379 "zoned": false, 00:15:31.379 "supported_io_types": { 00:15:31.379 "read": true, 00:15:31.379 "write": true, 00:15:31.379 "unmap": true, 00:15:31.379 "flush": true, 00:15:31.379 "reset": true, 00:15:31.379 "nvme_admin": false, 00:15:31.379 "nvme_io": false, 00:15:31.379 "nvme_io_md": false, 00:15:31.379 "write_zeroes": true, 00:15:31.379 "zcopy": false, 00:15:31.379 "get_zone_info": false, 00:15:31.379 "zone_management": false, 00:15:31.379 "zone_append": false, 00:15:31.379 "compare": false, 00:15:31.379 "compare_and_write": false, 00:15:31.379 "abort": 
false, 00:15:31.379 "seek_hole": false, 00:15:31.379 "seek_data": false, 00:15:31.379 "copy": false, 00:15:31.379 "nvme_iov_md": false 00:15:31.379 }, 00:15:31.379 "memory_domains": [ 00:15:31.379 { 00:15:31.379 "dma_device_id": "system", 00:15:31.379 "dma_device_type": 1 00:15:31.379 }, 00:15:31.379 { 00:15:31.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.379 "dma_device_type": 2 00:15:31.379 }, 00:15:31.380 { 00:15:31.380 "dma_device_id": "system", 00:15:31.380 "dma_device_type": 1 00:15:31.380 }, 00:15:31.380 { 00:15:31.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.380 "dma_device_type": 2 00:15:31.380 }, 00:15:31.380 { 00:15:31.380 "dma_device_id": "system", 00:15:31.380 "dma_device_type": 1 00:15:31.380 }, 00:15:31.380 { 00:15:31.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.380 "dma_device_type": 2 00:15:31.380 }, 00:15:31.380 { 00:15:31.380 "dma_device_id": "system", 00:15:31.380 "dma_device_type": 1 00:15:31.380 }, 00:15:31.380 { 00:15:31.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.380 "dma_device_type": 2 00:15:31.380 } 00:15:31.380 ], 00:15:31.380 "driver_specific": { 00:15:31.380 "raid": { 00:15:31.380 "uuid": "b427823f-4e34-410c-9f9f-6cab99cba70d", 00:15:31.380 "strip_size_kb": 64, 00:15:31.380 "state": "online", 00:15:31.380 "raid_level": "raid0", 00:15:31.380 "superblock": true, 00:15:31.380 "num_base_bdevs": 4, 00:15:31.380 "num_base_bdevs_discovered": 4, 00:15:31.380 "num_base_bdevs_operational": 4, 00:15:31.380 "base_bdevs_list": [ 00:15:31.380 { 00:15:31.380 "name": "NewBaseBdev", 00:15:31.380 "uuid": "9f9cd4e7-186b-4ec9-a246-c1929ac56cd5", 00:15:31.380 "is_configured": true, 00:15:31.380 "data_offset": 2048, 00:15:31.380 "data_size": 63488 00:15:31.380 }, 00:15:31.380 { 00:15:31.380 "name": "BaseBdev2", 00:15:31.380 "uuid": "7e821e58-a73f-43b5-a62f-b42bae54e054", 00:15:31.380 "is_configured": true, 00:15:31.380 "data_offset": 2048, 00:15:31.380 "data_size": 63488 00:15:31.380 }, 00:15:31.380 { 00:15:31.380 
"name": "BaseBdev3", 00:15:31.380 "uuid": "389f4262-012c-42fd-989d-80ed010f469b", 00:15:31.380 "is_configured": true, 00:15:31.380 "data_offset": 2048, 00:15:31.380 "data_size": 63488 00:15:31.380 }, 00:15:31.380 { 00:15:31.380 "name": "BaseBdev4", 00:15:31.380 "uuid": "1a8a56f1-6b7c-410d-b02f-94beb2aaa632", 00:15:31.380 "is_configured": true, 00:15:31.380 "data_offset": 2048, 00:15:31.380 "data_size": 63488 00:15:31.380 } 00:15:31.380 ] 00:15:31.380 } 00:15:31.380 } 00:15:31.380 }' 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:31.380 BaseBdev2 00:15:31.380 BaseBdev3 00:15:31.380 BaseBdev4' 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.380 04:37:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.380 04:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.638 [2024-11-27 04:37:19.128321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.638 [2024-11-27 04:37:19.128477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.638 [2024-11-27 04:37:19.128671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.638 [2024-11-27 04:37:19.128877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.638 [2024-11-27 04:37:19.129033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70262 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70262 ']' 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70262 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70262 00:15:31.638 killing process with pid 70262 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70262' 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70262 00:15:31.638 [2024-11-27 04:37:19.166125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.638 04:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70262 00:15:32.206 [2024-11-27 04:37:19.522229] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.198 ************************************ 00:15:33.198 END TEST raid_state_function_test_sb 00:15:33.198 ************************************ 00:15:33.198 04:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:33.198 00:15:33.198 real 0m12.675s 00:15:33.198 user 0m21.030s 00:15:33.198 sys 
0m1.700s 00:15:33.198 04:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.198 04:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.198 04:37:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:15:33.198 04:37:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:33.198 04:37:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.198 04:37:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:33.198 ************************************ 00:15:33.198 START TEST raid_superblock_test 00:15:33.198 ************************************ 00:15:33.198 04:37:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:15:33.198 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:15:33.198 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:33.198 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:33.198 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:33.198 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70948 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70948 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70948 ']' 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.199 04:37:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.199 [2024-11-27 04:37:20.755542] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:15:33.199 [2024-11-27 04:37:20.755969] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70948 ] 00:15:33.465 [2024-11-27 04:37:20.934940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.465 [2024-11-27 04:37:21.064489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.723 [2024-11-27 04:37:21.269629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.723 [2024-11-27 04:37:21.269722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:34.291 
04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.291 malloc1 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.291 [2024-11-27 04:37:21.789055] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:34.291 [2024-11-27 04:37:21.789259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.291 [2024-11-27 04:37:21.789436] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:34.291 [2024-11-27 04:37:21.789554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.291 [2024-11-27 04:37:21.792495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.291 [2024-11-27 04:37:21.792657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:34.291 pt1 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.291 malloc2 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.291 [2024-11-27 04:37:21.845305] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.291 [2024-11-27 04:37:21.845492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.291 [2024-11-27 04:37:21.845540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:34.291 [2024-11-27 04:37:21.845556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.291 [2024-11-27 04:37:21.848302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.291 pt2 00:15:34.291 [2024-11-27 04:37:21.848454] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.291 malloc3 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.291 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.292 [2024-11-27 04:37:21.909236] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:34.292 [2024-11-27 04:37:21.909420] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.292 [2024-11-27 04:37:21.909498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:34.292 [2024-11-27 04:37:21.909684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.551 [2024-11-27 04:37:21.912512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.551 [2024-11-27 04:37:21.912665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:34.551 pt3 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.551 malloc4 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.551 [2024-11-27 04:37:21.961404] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:34.551 [2024-11-27 04:37:21.961601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.551 [2024-11-27 04:37:21.961676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:34.551 [2024-11-27 04:37:21.961849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.551 [2024-11-27 04:37:21.964679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.551 [2024-11-27 04:37:21.964841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:34.551 pt4 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.551 [2024-11-27 04:37:21.973556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:34.551 [2024-11-27 
04:37:21.976089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.551 [2024-11-27 04:37:21.976330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:34.551 [2024-11-27 04:37:21.976508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:34.551 [2024-11-27 04:37:21.976758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:34.551 [2024-11-27 04:37:21.976801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:34.551 [2024-11-27 04:37:21.977137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:34.551 [2024-11-27 04:37:21.977351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:34.551 [2024-11-27 04:37:21.977372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:34.551 [2024-11-27 04:37:21.977602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.551 04:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.551 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.551 "name": "raid_bdev1", 00:15:34.551 "uuid": "f72deb9c-0058-4e44-b1fa-2fb845174cac", 00:15:34.551 "strip_size_kb": 64, 00:15:34.551 "state": "online", 00:15:34.551 "raid_level": "raid0", 00:15:34.551 "superblock": true, 00:15:34.551 "num_base_bdevs": 4, 00:15:34.551 "num_base_bdevs_discovered": 4, 00:15:34.551 "num_base_bdevs_operational": 4, 00:15:34.551 "base_bdevs_list": [ 00:15:34.551 { 00:15:34.551 "name": "pt1", 00:15:34.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.551 "is_configured": true, 00:15:34.551 "data_offset": 2048, 00:15:34.551 "data_size": 63488 00:15:34.551 }, 00:15:34.551 { 00:15:34.551 "name": "pt2", 00:15:34.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.551 "is_configured": true, 00:15:34.551 "data_offset": 2048, 00:15:34.551 "data_size": 63488 00:15:34.551 }, 00:15:34.551 { 00:15:34.551 "name": "pt3", 00:15:34.551 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.551 "is_configured": true, 00:15:34.551 "data_offset": 2048, 00:15:34.551 
"data_size": 63488 00:15:34.551 }, 00:15:34.551 { 00:15:34.551 "name": "pt4", 00:15:34.551 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:34.551 "is_configured": true, 00:15:34.551 "data_offset": 2048, 00:15:34.551 "data_size": 63488 00:15:34.551 } 00:15:34.551 ] 00:15:34.551 }' 00:15:34.551 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.551 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.810 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:34.810 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:34.810 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.810 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.810 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.810 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.810 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.810 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.810 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.810 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.810 [2024-11-27 04:37:22.422111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.069 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.069 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:35.069 "name": "raid_bdev1", 00:15:35.069 "aliases": [ 00:15:35.069 "f72deb9c-0058-4e44-b1fa-2fb845174cac" 
00:15:35.069 ], 00:15:35.069 "product_name": "Raid Volume", 00:15:35.069 "block_size": 512, 00:15:35.069 "num_blocks": 253952, 00:15:35.069 "uuid": "f72deb9c-0058-4e44-b1fa-2fb845174cac", 00:15:35.069 "assigned_rate_limits": { 00:15:35.069 "rw_ios_per_sec": 0, 00:15:35.069 "rw_mbytes_per_sec": 0, 00:15:35.069 "r_mbytes_per_sec": 0, 00:15:35.069 "w_mbytes_per_sec": 0 00:15:35.069 }, 00:15:35.069 "claimed": false, 00:15:35.069 "zoned": false, 00:15:35.069 "supported_io_types": { 00:15:35.069 "read": true, 00:15:35.069 "write": true, 00:15:35.069 "unmap": true, 00:15:35.069 "flush": true, 00:15:35.069 "reset": true, 00:15:35.069 "nvme_admin": false, 00:15:35.069 "nvme_io": false, 00:15:35.069 "nvme_io_md": false, 00:15:35.069 "write_zeroes": true, 00:15:35.069 "zcopy": false, 00:15:35.069 "get_zone_info": false, 00:15:35.069 "zone_management": false, 00:15:35.069 "zone_append": false, 00:15:35.069 "compare": false, 00:15:35.069 "compare_and_write": false, 00:15:35.069 "abort": false, 00:15:35.069 "seek_hole": false, 00:15:35.069 "seek_data": false, 00:15:35.069 "copy": false, 00:15:35.069 "nvme_iov_md": false 00:15:35.069 }, 00:15:35.069 "memory_domains": [ 00:15:35.069 { 00:15:35.069 "dma_device_id": "system", 00:15:35.069 "dma_device_type": 1 00:15:35.069 }, 00:15:35.069 { 00:15:35.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.069 "dma_device_type": 2 00:15:35.069 }, 00:15:35.069 { 00:15:35.069 "dma_device_id": "system", 00:15:35.069 "dma_device_type": 1 00:15:35.069 }, 00:15:35.069 { 00:15:35.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.069 "dma_device_type": 2 00:15:35.069 }, 00:15:35.069 { 00:15:35.069 "dma_device_id": "system", 00:15:35.069 "dma_device_type": 1 00:15:35.069 }, 00:15:35.069 { 00:15:35.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.069 "dma_device_type": 2 00:15:35.069 }, 00:15:35.069 { 00:15:35.069 "dma_device_id": "system", 00:15:35.069 "dma_device_type": 1 00:15:35.069 }, 00:15:35.069 { 00:15:35.069 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:35.069 "dma_device_type": 2 00:15:35.069 } 00:15:35.069 ], 00:15:35.069 "driver_specific": { 00:15:35.069 "raid": { 00:15:35.069 "uuid": "f72deb9c-0058-4e44-b1fa-2fb845174cac", 00:15:35.069 "strip_size_kb": 64, 00:15:35.069 "state": "online", 00:15:35.069 "raid_level": "raid0", 00:15:35.069 "superblock": true, 00:15:35.069 "num_base_bdevs": 4, 00:15:35.069 "num_base_bdevs_discovered": 4, 00:15:35.069 "num_base_bdevs_operational": 4, 00:15:35.069 "base_bdevs_list": [ 00:15:35.069 { 00:15:35.069 "name": "pt1", 00:15:35.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.069 "is_configured": true, 00:15:35.069 "data_offset": 2048, 00:15:35.069 "data_size": 63488 00:15:35.069 }, 00:15:35.069 { 00:15:35.069 "name": "pt2", 00:15:35.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.069 "is_configured": true, 00:15:35.069 "data_offset": 2048, 00:15:35.069 "data_size": 63488 00:15:35.069 }, 00:15:35.069 { 00:15:35.069 "name": "pt3", 00:15:35.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.069 "is_configured": true, 00:15:35.069 "data_offset": 2048, 00:15:35.069 "data_size": 63488 00:15:35.069 }, 00:15:35.069 { 00:15:35.069 "name": "pt4", 00:15:35.069 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:35.069 "is_configured": true, 00:15:35.069 "data_offset": 2048, 00:15:35.069 "data_size": 63488 00:15:35.070 } 00:15:35.070 ] 00:15:35.070 } 00:15:35.070 } 00:15:35.070 }' 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:35.070 pt2 00:15:35.070 pt3 00:15:35.070 pt4' 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.070 04:37:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.070 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.329 [2024-11-27 04:37:22.778186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f72deb9c-0058-4e44-b1fa-2fb845174cac 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f72deb9c-0058-4e44-b1fa-2fb845174cac ']' 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.329 [2024-11-27 04:37:22.821803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.329 [2024-11-27 04:37:22.821945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.329 [2024-11-27 04:37:22.822135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.329 [2024-11-27 04:37:22.822322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.329 [2024-11-27 04:37:22.822481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:35.329 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.588 04:37:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.588 [2024-11-27 04:37:22.969918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:35.588 [2024-11-27 04:37:22.972512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:35.588 [2024-11-27 04:37:22.972576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:35.588 [2024-11-27 04:37:22.972627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:35.588 [2024-11-27 04:37:22.972730] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:35.588 [2024-11-27 04:37:22.972818] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:35.588 [2024-11-27 04:37:22.972853] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:35.588 [2024-11-27 04:37:22.972884] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:35.588 [2024-11-27 04:37:22.972906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.588 [2024-11-27 04:37:22.972925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:15:35.588 request: 00:15:35.588 { 00:15:35.588 "name": "raid_bdev1", 00:15:35.588 "raid_level": "raid0", 00:15:35.588 "base_bdevs": [ 00:15:35.588 "malloc1", 00:15:35.588 "malloc2", 00:15:35.588 "malloc3", 00:15:35.588 "malloc4" 00:15:35.588 ], 00:15:35.588 "strip_size_kb": 64, 00:15:35.588 "superblock": false, 00:15:35.588 "method": "bdev_raid_create", 00:15:35.588 "req_id": 1 00:15:35.588 } 00:15:35.588 Got JSON-RPC error response 00:15:35.588 response: 00:15:35.588 { 00:15:35.588 "code": -17, 00:15:35.588 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:35.588 } 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.588 04:37:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.588 [2024-11-27 04:37:23.037903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.588 [2024-11-27 04:37:23.038083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.588 [2024-11-27 04:37:23.038156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:35.588 [2024-11-27 04:37:23.038264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.588 [2024-11-27 04:37:23.041223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.588 [2024-11-27 04:37:23.041399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.588 [2024-11-27 04:37:23.041506] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:35.588 [2024-11-27 04:37:23.041580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.588 pt1 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.588 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.588 "name": "raid_bdev1", 00:15:35.588 "uuid": "f72deb9c-0058-4e44-b1fa-2fb845174cac", 00:15:35.588 "strip_size_kb": 64, 00:15:35.588 "state": "configuring", 00:15:35.588 "raid_level": "raid0", 00:15:35.588 "superblock": true, 00:15:35.588 "num_base_bdevs": 4, 00:15:35.588 "num_base_bdevs_discovered": 1, 00:15:35.588 "num_base_bdevs_operational": 4, 00:15:35.589 "base_bdevs_list": [ 00:15:35.589 { 00:15:35.589 "name": "pt1", 00:15:35.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.589 "is_configured": true, 00:15:35.589 "data_offset": 2048, 00:15:35.589 "data_size": 63488 00:15:35.589 }, 00:15:35.589 { 00:15:35.589 "name": null, 00:15:35.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.589 "is_configured": false, 00:15:35.589 "data_offset": 2048, 00:15:35.589 "data_size": 63488 00:15:35.589 }, 00:15:35.589 { 00:15:35.589 "name": null, 00:15:35.589 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.589 "is_configured": false, 00:15:35.589 "data_offset": 2048, 00:15:35.589 "data_size": 63488 00:15:35.589 }, 00:15:35.589 { 00:15:35.589 "name": null, 00:15:35.589 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:35.589 "is_configured": false, 00:15:35.589 "data_offset": 2048, 00:15:35.589 "data_size": 63488 00:15:35.589 } 00:15:35.589 ] 00:15:35.589 }' 00:15:35.589 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.589 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.156 [2024-11-27 04:37:23.538096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.156 [2024-11-27 04:37:23.538316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.156 [2024-11-27 04:37:23.538355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:36.156 [2024-11-27 04:37:23.538374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.156 [2024-11-27 04:37:23.538952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.156 [2024-11-27 04:37:23.538982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.156 [2024-11-27 04:37:23.539083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:36.156 [2024-11-27 04:37:23.539120] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.156 pt2 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.156 [2024-11-27 04:37:23.546061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.156 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.157 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.157 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.157 04:37:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.157 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.157 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.157 04:37:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.157 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.157 "name": "raid_bdev1", 00:15:36.157 "uuid": "f72deb9c-0058-4e44-b1fa-2fb845174cac", 00:15:36.157 "strip_size_kb": 64, 00:15:36.157 "state": "configuring", 00:15:36.157 "raid_level": "raid0", 00:15:36.157 "superblock": true, 00:15:36.157 "num_base_bdevs": 4, 00:15:36.157 "num_base_bdevs_discovered": 1, 00:15:36.157 "num_base_bdevs_operational": 4, 00:15:36.157 "base_bdevs_list": [ 00:15:36.157 { 00:15:36.157 "name": "pt1", 00:15:36.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.157 "is_configured": true, 00:15:36.157 "data_offset": 2048, 00:15:36.157 "data_size": 63488 00:15:36.157 }, 00:15:36.157 { 00:15:36.157 "name": null, 00:15:36.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.157 "is_configured": false, 00:15:36.157 "data_offset": 0, 00:15:36.157 "data_size": 63488 00:15:36.157 }, 00:15:36.157 { 00:15:36.157 "name": null, 00:15:36.157 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.157 "is_configured": false, 00:15:36.157 "data_offset": 2048, 00:15:36.157 "data_size": 63488 00:15:36.157 }, 00:15:36.157 { 00:15:36.157 "name": null, 00:15:36.157 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:36.157 "is_configured": false, 00:15:36.157 "data_offset": 2048, 00:15:36.157 "data_size": 63488 00:15:36.157 } 00:15:36.157 ] 00:15:36.157 }' 00:15:36.157 04:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.157 04:37:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.724 [2024-11-27 04:37:24.062233] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.724 [2024-11-27 04:37:24.062447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.724 [2024-11-27 04:37:24.062521] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:36.724 [2024-11-27 04:37:24.062661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.724 [2024-11-27 04:37:24.063259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.724 [2024-11-27 04:37:24.063286] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.724 [2024-11-27 04:37:24.063392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:36.724 [2024-11-27 04:37:24.063424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.724 pt2 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.724 [2024-11-27 04:37:24.070182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:36.724 [2024-11-27 04:37:24.070355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.724 [2024-11-27 04:37:24.070425] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:36.724 [2024-11-27 04:37:24.070545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.724 [2024-11-27 04:37:24.071060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.724 [2024-11-27 04:37:24.071202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:36.724 [2024-11-27 04:37:24.071402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:36.724 [2024-11-27 04:37:24.071560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:36.724 pt3 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.724 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.724 [2024-11-27 04:37:24.082190] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:36.724 [2024-11-27 04:37:24.082346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.724 [2024-11-27 04:37:24.082414] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:36.724 [2024-11-27 04:37:24.082434] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.724 [2024-11-27 04:37:24.082902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.724 [2024-11-27 04:37:24.082928] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:36.724 [2024-11-27 04:37:24.083007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:36.724 [2024-11-27 04:37:24.083039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:36.724 [2024-11-27 04:37:24.083203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:36.724 [2024-11-27 04:37:24.083219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:36.724 [2024-11-27 04:37:24.083518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:36.724 [2024-11-27 04:37:24.083702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:36.725 [2024-11-27 04:37:24.083723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:36.725 [2024-11-27 04:37:24.083895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.725 pt4 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.725 "name": "raid_bdev1", 00:15:36.725 "uuid": "f72deb9c-0058-4e44-b1fa-2fb845174cac", 00:15:36.725 "strip_size_kb": 64, 00:15:36.725 "state": "online", 00:15:36.725 "raid_level": "raid0", 00:15:36.725 
"superblock": true, 00:15:36.725 "num_base_bdevs": 4, 00:15:36.725 "num_base_bdevs_discovered": 4, 00:15:36.725 "num_base_bdevs_operational": 4, 00:15:36.725 "base_bdevs_list": [ 00:15:36.725 { 00:15:36.725 "name": "pt1", 00:15:36.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.725 "is_configured": true, 00:15:36.725 "data_offset": 2048, 00:15:36.725 "data_size": 63488 00:15:36.725 }, 00:15:36.725 { 00:15:36.725 "name": "pt2", 00:15:36.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.725 "is_configured": true, 00:15:36.725 "data_offset": 2048, 00:15:36.725 "data_size": 63488 00:15:36.725 }, 00:15:36.725 { 00:15:36.725 "name": "pt3", 00:15:36.725 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.725 "is_configured": true, 00:15:36.725 "data_offset": 2048, 00:15:36.725 "data_size": 63488 00:15:36.725 }, 00:15:36.725 { 00:15:36.725 "name": "pt4", 00:15:36.725 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:36.725 "is_configured": true, 00:15:36.725 "data_offset": 2048, 00:15:36.725 "data_size": 63488 00:15:36.725 } 00:15:36.725 ] 00:15:36.725 }' 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.725 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.292 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:37.292 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:37.292 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.292 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.292 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.293 04:37:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.293 [2024-11-27 04:37:24.642763] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.293 "name": "raid_bdev1", 00:15:37.293 "aliases": [ 00:15:37.293 "f72deb9c-0058-4e44-b1fa-2fb845174cac" 00:15:37.293 ], 00:15:37.293 "product_name": "Raid Volume", 00:15:37.293 "block_size": 512, 00:15:37.293 "num_blocks": 253952, 00:15:37.293 "uuid": "f72deb9c-0058-4e44-b1fa-2fb845174cac", 00:15:37.293 "assigned_rate_limits": { 00:15:37.293 "rw_ios_per_sec": 0, 00:15:37.293 "rw_mbytes_per_sec": 0, 00:15:37.293 "r_mbytes_per_sec": 0, 00:15:37.293 "w_mbytes_per_sec": 0 00:15:37.293 }, 00:15:37.293 "claimed": false, 00:15:37.293 "zoned": false, 00:15:37.293 "supported_io_types": { 00:15:37.293 "read": true, 00:15:37.293 "write": true, 00:15:37.293 "unmap": true, 00:15:37.293 "flush": true, 00:15:37.293 "reset": true, 00:15:37.293 "nvme_admin": false, 00:15:37.293 "nvme_io": false, 00:15:37.293 "nvme_io_md": false, 00:15:37.293 "write_zeroes": true, 00:15:37.293 "zcopy": false, 00:15:37.293 "get_zone_info": false, 00:15:37.293 "zone_management": false, 00:15:37.293 "zone_append": false, 00:15:37.293 "compare": false, 00:15:37.293 "compare_and_write": false, 00:15:37.293 "abort": false, 00:15:37.293 "seek_hole": false, 00:15:37.293 "seek_data": false, 00:15:37.293 "copy": false, 00:15:37.293 "nvme_iov_md": false 00:15:37.293 }, 00:15:37.293 
"memory_domains": [ 00:15:37.293 { 00:15:37.293 "dma_device_id": "system", 00:15:37.293 "dma_device_type": 1 00:15:37.293 }, 00:15:37.293 { 00:15:37.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.293 "dma_device_type": 2 00:15:37.293 }, 00:15:37.293 { 00:15:37.293 "dma_device_id": "system", 00:15:37.293 "dma_device_type": 1 00:15:37.293 }, 00:15:37.293 { 00:15:37.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.293 "dma_device_type": 2 00:15:37.293 }, 00:15:37.293 { 00:15:37.293 "dma_device_id": "system", 00:15:37.293 "dma_device_type": 1 00:15:37.293 }, 00:15:37.293 { 00:15:37.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.293 "dma_device_type": 2 00:15:37.293 }, 00:15:37.293 { 00:15:37.293 "dma_device_id": "system", 00:15:37.293 "dma_device_type": 1 00:15:37.293 }, 00:15:37.293 { 00:15:37.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.293 "dma_device_type": 2 00:15:37.293 } 00:15:37.293 ], 00:15:37.293 "driver_specific": { 00:15:37.293 "raid": { 00:15:37.293 "uuid": "f72deb9c-0058-4e44-b1fa-2fb845174cac", 00:15:37.293 "strip_size_kb": 64, 00:15:37.293 "state": "online", 00:15:37.293 "raid_level": "raid0", 00:15:37.293 "superblock": true, 00:15:37.293 "num_base_bdevs": 4, 00:15:37.293 "num_base_bdevs_discovered": 4, 00:15:37.293 "num_base_bdevs_operational": 4, 00:15:37.293 "base_bdevs_list": [ 00:15:37.293 { 00:15:37.293 "name": "pt1", 00:15:37.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.293 "is_configured": true, 00:15:37.293 "data_offset": 2048, 00:15:37.293 "data_size": 63488 00:15:37.293 }, 00:15:37.293 { 00:15:37.293 "name": "pt2", 00:15:37.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.293 "is_configured": true, 00:15:37.293 "data_offset": 2048, 00:15:37.293 "data_size": 63488 00:15:37.293 }, 00:15:37.293 { 00:15:37.293 "name": "pt3", 00:15:37.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.293 "is_configured": true, 00:15:37.293 "data_offset": 2048, 00:15:37.293 "data_size": 63488 
00:15:37.293 }, 00:15:37.293 { 00:15:37.293 "name": "pt4", 00:15:37.293 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:37.293 "is_configured": true, 00:15:37.293 "data_offset": 2048, 00:15:37.293 "data_size": 63488 00:15:37.293 } 00:15:37.293 ] 00:15:37.293 } 00:15:37.293 } 00:15:37.293 }' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:37.293 pt2 00:15:37.293 pt3 00:15:37.293 pt4' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.293 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:37.553 [2024-11-27 04:37:24.978809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.553 04:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f72deb9c-0058-4e44-b1fa-2fb845174cac '!=' f72deb9c-0058-4e44-b1fa-2fb845174cac ']' 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70948 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70948 ']' 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70948 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70948 00:15:37.553 killing process with pid 70948 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70948' 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70948 00:15:37.553 [2024-11-27 04:37:25.061558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.553 [2024-11-27 04:37:25.061663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.553 04:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70948 00:15:37.553 [2024-11-27 04:37:25.061758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.553 [2024-11-27 04:37:25.061797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:37.812 [2024-11-27 04:37:25.418828] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:39.189 04:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:39.189 00:15:39.189 real 0m5.822s 00:15:39.189 user 0m8.721s 00:15:39.189 sys 0m0.834s 00:15:39.189 04:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.189 ************************************ 00:15:39.189 END TEST raid_superblock_test 00:15:39.189 ************************************ 00:15:39.189 04:37:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.189 04:37:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:15:39.189 04:37:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:39.189 04:37:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.189 04:37:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:39.189 ************************************ 00:15:39.189 START TEST raid_read_error_test 00:15:39.189 ************************************ 00:15:39.189 04:37:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:15:39.189 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:39.189 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:39.189 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:39.189 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:39.189 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:39.189 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:39.189 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:39.189 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:39.189 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3tfB8gkgUU 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71207 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71207 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71207 ']' 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.190 04:37:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.190 [2024-11-27 04:37:26.627086] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:15:39.190 [2024-11-27 04:37:26.627404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71207 ] 00:15:39.190 [2024-11-27 04:37:26.800608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.449 [2024-11-27 04:37:26.929904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.707 [2024-11-27 04:37:27.132057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.707 [2024-11-27 04:37:27.132334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.274 BaseBdev1_malloc 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.274 true 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.274 [2024-11-27 04:37:27.656412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:40.274 [2024-11-27 04:37:27.656616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.274 [2024-11-27 04:37:27.656690] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:40.274 [2024-11-27 04:37:27.656716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.274 [2024-11-27 04:37:27.659635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.274 [2024-11-27 04:37:27.659854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:40.274 BaseBdev1 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.274 BaseBdev2_malloc 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.274 true 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.274 [2024-11-27 04:37:27.720618] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:40.274 [2024-11-27 04:37:27.720687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.274 [2024-11-27 04:37:27.720714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:40.274 [2024-11-27 04:37:27.720731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.274 [2024-11-27 04:37:27.723519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.274 [2024-11-27 04:37:27.723571] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:40.274 BaseBdev2 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.274 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.274 BaseBdev3_malloc 00:15:40.274 04:37:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.275 true 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.275 [2024-11-27 04:37:27.784036] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:40.275 [2024-11-27 04:37:27.784223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.275 [2024-11-27 04:37:27.784352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:40.275 [2024-11-27 04:37:27.784384] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.275 [2024-11-27 04:37:27.787186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.275 [2024-11-27 04:37:27.787248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:40.275 BaseBdev3 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.275 BaseBdev4_malloc 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.275 true 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.275 [2024-11-27 04:37:27.843905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:40.275 [2024-11-27 04:37:27.844087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.275 [2024-11-27 04:37:27.844160] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:40.275 [2024-11-27 04:37:27.844214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.275 [2024-11-27 04:37:27.847047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.275 BaseBdev4 00:15:40.275 [2024-11-27 04:37:27.847205] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.275 [2024-11-27 04:37:27.852058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.275 [2024-11-27 04:37:27.854635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.275 [2024-11-27 04:37:27.854901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.275 [2024-11-27 04:37:27.855120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:40.275 [2024-11-27 04:37:27.855554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:40.275 [2024-11-27 04:37:27.855687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:40.275 [2024-11-27 04:37:27.856098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:40.275 [2024-11-27 04:37:27.856441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:40.275 [2024-11-27 04:37:27.856564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:40.275 [2024-11-27 04:37:27.856958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:40.275 04:37:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.275 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.532 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.532 "name": "raid_bdev1", 00:15:40.532 "uuid": "cd0364ce-797a-4a0c-9eb0-0569397b4644", 00:15:40.532 "strip_size_kb": 64, 00:15:40.532 "state": "online", 00:15:40.532 "raid_level": "raid0", 00:15:40.532 "superblock": true, 00:15:40.532 "num_base_bdevs": 4, 00:15:40.532 "num_base_bdevs_discovered": 4, 00:15:40.532 "num_base_bdevs_operational": 4, 00:15:40.532 "base_bdevs_list": [ 00:15:40.532 
{ 00:15:40.533 "name": "BaseBdev1", 00:15:40.533 "uuid": "e576dd6c-fdb2-535b-a97a-90542881f6d6", 00:15:40.533 "is_configured": true, 00:15:40.533 "data_offset": 2048, 00:15:40.533 "data_size": 63488 00:15:40.533 }, 00:15:40.533 { 00:15:40.533 "name": "BaseBdev2", 00:15:40.533 "uuid": "8b7f3dce-8969-5734-b951-748d74da2504", 00:15:40.533 "is_configured": true, 00:15:40.533 "data_offset": 2048, 00:15:40.533 "data_size": 63488 00:15:40.533 }, 00:15:40.533 { 00:15:40.533 "name": "BaseBdev3", 00:15:40.533 "uuid": "0b46f1a9-4308-5b63-a0c6-29399319055a", 00:15:40.533 "is_configured": true, 00:15:40.533 "data_offset": 2048, 00:15:40.533 "data_size": 63488 00:15:40.533 }, 00:15:40.533 { 00:15:40.533 "name": "BaseBdev4", 00:15:40.533 "uuid": "929a3648-7adb-544a-8e1d-f89592a2cbd4", 00:15:40.533 "is_configured": true, 00:15:40.533 "data_offset": 2048, 00:15:40.533 "data_size": 63488 00:15:40.533 } 00:15:40.533 ] 00:15:40.533 }' 00:15:40.533 04:37:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.533 04:37:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.790 04:37:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:40.790 04:37:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:41.095 [2024-11-27 04:37:28.482539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.027 04:37:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.027 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.027 04:37:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.027 "name": "raid_bdev1", 00:15:42.027 "uuid": "cd0364ce-797a-4a0c-9eb0-0569397b4644", 00:15:42.027 "strip_size_kb": 64, 00:15:42.027 "state": "online", 00:15:42.027 "raid_level": "raid0", 00:15:42.027 "superblock": true, 00:15:42.027 "num_base_bdevs": 4, 00:15:42.027 "num_base_bdevs_discovered": 4, 00:15:42.027 "num_base_bdevs_operational": 4, 00:15:42.027 "base_bdevs_list": [ 00:15:42.027 { 00:15:42.027 "name": "BaseBdev1", 00:15:42.027 "uuid": "e576dd6c-fdb2-535b-a97a-90542881f6d6", 00:15:42.027 "is_configured": true, 00:15:42.027 "data_offset": 2048, 00:15:42.027 "data_size": 63488 00:15:42.027 }, 00:15:42.027 { 00:15:42.027 "name": "BaseBdev2", 00:15:42.027 "uuid": "8b7f3dce-8969-5734-b951-748d74da2504", 00:15:42.027 "is_configured": true, 00:15:42.027 "data_offset": 2048, 00:15:42.028 "data_size": 63488 00:15:42.028 }, 00:15:42.028 { 00:15:42.028 "name": "BaseBdev3", 00:15:42.028 "uuid": "0b46f1a9-4308-5b63-a0c6-29399319055a", 00:15:42.028 "is_configured": true, 00:15:42.028 "data_offset": 2048, 00:15:42.028 "data_size": 63488 00:15:42.028 }, 00:15:42.028 { 00:15:42.028 "name": "BaseBdev4", 00:15:42.028 "uuid": "929a3648-7adb-544a-8e1d-f89592a2cbd4", 00:15:42.028 "is_configured": true, 00:15:42.028 "data_offset": 2048, 00:15:42.028 "data_size": 63488 00:15:42.028 } 00:15:42.028 ] 00:15:42.028 }' 00:15:42.028 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.028 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.286 [2024-11-27 04:37:29.860711] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.286 [2024-11-27 04:37:29.860750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.286 [2024-11-27 04:37:29.864292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.286 [2024-11-27 04:37:29.864365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.286 [2024-11-27 04:37:29.864426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.286 [2024-11-27 04:37:29.864445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:42.286 { 00:15:42.286 "results": [ 00:15:42.286 { 00:15:42.286 "job": "raid_bdev1", 00:15:42.286 "core_mask": "0x1", 00:15:42.286 "workload": "randrw", 00:15:42.286 "percentage": 50, 00:15:42.286 "status": "finished", 00:15:42.286 "queue_depth": 1, 00:15:42.286 "io_size": 131072, 00:15:42.286 "runtime": 1.375875, 00:15:42.286 "iops": 10434.814209139638, 00:15:42.286 "mibps": 1304.3517761424548, 00:15:42.286 "io_failed": 1, 00:15:42.286 "io_timeout": 0, 00:15:42.286 "avg_latency_us": 133.08248451924175, 00:15:42.286 "min_latency_us": 40.96, 00:15:42.286 "max_latency_us": 1832.0290909090909 00:15:42.286 } 00:15:42.286 ], 00:15:42.286 "core_count": 1 00:15:42.286 } 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71207 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71207 ']' 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71207 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71207 00:15:42.286 killing process with pid 71207 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71207' 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71207 00:15:42.286 [2024-11-27 04:37:29.898482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.286 04:37:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71207 00:15:42.853 [2024-11-27 04:37:30.188559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.787 04:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3tfB8gkgUU 00:15:43.787 04:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:43.787 04:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:43.787 04:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:15:43.787 04:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:43.787 04:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:43.787 04:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:43.787 ************************************ 00:15:43.787 END TEST raid_read_error_test 00:15:43.787 ************************************ 00:15:43.787 04:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:15:43.787 00:15:43.787 real 0m4.776s 
00:15:43.787 user 0m5.872s 00:15:43.787 sys 0m0.573s 00:15:43.787 04:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.787 04:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.787 04:37:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:15:43.787 04:37:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:43.787 04:37:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.787 04:37:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.787 ************************************ 00:15:43.787 START TEST raid_write_error_test 00:15:43.787 ************************************ 00:15:43.787 04:37:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:15:43.787 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:43.787 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:43.787 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:43.787 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:43.787 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kLqDhxDh0e 00:15:43.788 04:37:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71358 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71358 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71358 ']' 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.788 04:37:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.046 [2024-11-27 04:37:31.474461] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:15:44.046 [2024-11-27 04:37:31.474658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71358 ] 00:15:44.046 [2024-11-27 04:37:31.660916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.305 [2024-11-27 04:37:31.791086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.564 [2024-11-27 04:37:31.997371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.564 [2024-11-27 04:37:31.997448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 BaseBdev1_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 true 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 [2024-11-27 04:37:32.524195] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:45.137 [2024-11-27 04:37:32.524426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.137 [2024-11-27 04:37:32.524467] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:45.137 [2024-11-27 04:37:32.524487] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.137 [2024-11-27 04:37:32.527310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.137 [2024-11-27 04:37:32.527383] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:45.137 BaseBdev1 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 BaseBdev2_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:45.137 04:37:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 true 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 [2024-11-27 04:37:32.588272] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:45.137 [2024-11-27 04:37:32.588352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.137 [2024-11-27 04:37:32.588380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:45.137 [2024-11-27 04:37:32.588397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.137 [2024-11-27 04:37:32.591203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.137 [2024-11-27 04:37:32.591252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:45.137 BaseBdev2 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:45.137 BaseBdev3_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 true 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 [2024-11-27 04:37:32.666905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:45.137 [2024-11-27 04:37:32.666975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.137 [2024-11-27 04:37:32.667002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:45.137 [2024-11-27 04:37:32.667020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.137 [2024-11-27 04:37:32.669852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.137 [2024-11-27 04:37:32.669902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:45.137 BaseBdev3 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 BaseBdev4_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 true 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 [2024-11-27 04:37:32.726855] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:45.137 [2024-11-27 04:37:32.728236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.137 [2024-11-27 04:37:32.728274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:45.137 [2024-11-27 04:37:32.728294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.137 BaseBdev4 00:15:45.137 [2024-11-27 04:37:32.731083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.137 [2024-11-27 04:37:32.731133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 
00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.137 [2024-11-27 04:37:32.735373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.137 [2024-11-27 04:37:32.737977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.137 [2024-11-27 04:37:32.738204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:45.137 [2024-11-27 04:37:32.738421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:45.137 [2024-11-27 04:37:32.738837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:45.137 [2024-11-27 04:37:32.738869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:45.137 [2024-11-27 04:37:32.739186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:45.137 [2024-11-27 04:37:32.739405] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:45.137 [2024-11-27 04:37:32.739423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:45.137 [2024-11-27 04:37:32.739675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.137 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.138 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.397 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.398 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.398 "name": "raid_bdev1", 00:15:45.398 "uuid": "e6fde3b1-9aa0-4a3f-acb7-0a3a2ce53ee5", 00:15:45.398 "strip_size_kb": 64, 00:15:45.398 "state": "online", 00:15:45.398 "raid_level": "raid0", 00:15:45.398 "superblock": true, 00:15:45.398 "num_base_bdevs": 4, 00:15:45.398 "num_base_bdevs_discovered": 4, 00:15:45.398 
"num_base_bdevs_operational": 4, 00:15:45.398 "base_bdevs_list": [ 00:15:45.398 { 00:15:45.398 "name": "BaseBdev1", 00:15:45.398 "uuid": "9e2ae93e-6b18-5ee2-b0b9-bb2361746d26", 00:15:45.398 "is_configured": true, 00:15:45.398 "data_offset": 2048, 00:15:45.398 "data_size": 63488 00:15:45.398 }, 00:15:45.398 { 00:15:45.398 "name": "BaseBdev2", 00:15:45.398 "uuid": "ba2533fc-0f5e-5250-98cf-fea194d7fae4", 00:15:45.398 "is_configured": true, 00:15:45.398 "data_offset": 2048, 00:15:45.398 "data_size": 63488 00:15:45.398 }, 00:15:45.398 { 00:15:45.398 "name": "BaseBdev3", 00:15:45.398 "uuid": "387d146a-7640-5291-8673-864131b4f7dd", 00:15:45.398 "is_configured": true, 00:15:45.398 "data_offset": 2048, 00:15:45.398 "data_size": 63488 00:15:45.398 }, 00:15:45.398 { 00:15:45.398 "name": "BaseBdev4", 00:15:45.398 "uuid": "20b96c81-a4c6-5492-a4a1-d6e06a7ae5d5", 00:15:45.398 "is_configured": true, 00:15:45.398 "data_offset": 2048, 00:15:45.398 "data_size": 63488 00:15:45.398 } 00:15:45.398 ] 00:15:45.398 }' 00:15:45.398 04:37:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.398 04:37:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.657 04:37:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:45.657 04:37:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:45.916 [2024-11-27 04:37:33.361194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.852 "name": "raid_bdev1", 00:15:46.852 "uuid": "e6fde3b1-9aa0-4a3f-acb7-0a3a2ce53ee5", 00:15:46.852 "strip_size_kb": 64, 00:15:46.852 "state": "online", 00:15:46.852 "raid_level": "raid0", 00:15:46.852 "superblock": true, 00:15:46.852 "num_base_bdevs": 4, 00:15:46.852 "num_base_bdevs_discovered": 4, 00:15:46.852 "num_base_bdevs_operational": 4, 00:15:46.852 "base_bdevs_list": [ 00:15:46.852 { 00:15:46.852 "name": "BaseBdev1", 00:15:46.852 "uuid": "9e2ae93e-6b18-5ee2-b0b9-bb2361746d26", 00:15:46.852 "is_configured": true, 00:15:46.852 "data_offset": 2048, 00:15:46.852 "data_size": 63488 00:15:46.852 }, 00:15:46.852 { 00:15:46.852 "name": "BaseBdev2", 00:15:46.852 "uuid": "ba2533fc-0f5e-5250-98cf-fea194d7fae4", 00:15:46.852 "is_configured": true, 00:15:46.852 "data_offset": 2048, 00:15:46.852 "data_size": 63488 00:15:46.852 }, 00:15:46.852 { 00:15:46.852 "name": "BaseBdev3", 00:15:46.852 "uuid": "387d146a-7640-5291-8673-864131b4f7dd", 00:15:46.852 "is_configured": true, 00:15:46.852 "data_offset": 2048, 00:15:46.852 "data_size": 63488 00:15:46.852 }, 00:15:46.852 { 00:15:46.852 "name": "BaseBdev4", 00:15:46.852 "uuid": "20b96c81-a4c6-5492-a4a1-d6e06a7ae5d5", 00:15:46.852 "is_configured": true, 00:15:46.852 "data_offset": 2048, 00:15:46.852 "data_size": 63488 00:15:46.852 } 00:15:46.852 ] 00:15:46.852 }' 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.852 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:47.419 [2024-11-27 04:37:34.768103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.419 [2024-11-27 04:37:34.768142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.419 [2024-11-27 04:37:34.771570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.419 [2024-11-27 04:37:34.771803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.419 [2024-11-27 04:37:34.771880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.419 [2024-11-27 04:37:34.771900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.419 { 00:15:47.419 "results": [ 00:15:47.419 { 00:15:47.419 "job": "raid_bdev1", 00:15:47.419 "core_mask": "0x1", 00:15:47.419 "workload": "randrw", 00:15:47.419 "percentage": 50, 00:15:47.419 "status": "finished", 00:15:47.419 "queue_depth": 1, 00:15:47.419 "io_size": 131072, 00:15:47.419 "runtime": 1.404552, 00:15:47.419 "iops": 10203.253421731626, 00:15:47.419 "mibps": 1275.4066777164533, 00:15:47.419 "io_failed": 1, 00:15:47.419 "io_timeout": 0, 00:15:47.419 "avg_latency_us": 136.55204526425294, 00:15:47.419 "min_latency_us": 41.42545454545454, 00:15:47.419 "max_latency_us": 2263.970909090909 00:15:47.419 } 00:15:47.419 ], 00:15:47.419 "core_count": 1 00:15:47.419 } 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71358 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71358 ']' 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71358 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71358 00:15:47.419 killing process with pid 71358 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71358' 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71358 00:15:47.419 04:37:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71358 00:15:47.419 [2024-11-27 04:37:34.803727] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.679 [2024-11-27 04:37:35.097717] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.613 04:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kLqDhxDh0e 00:15:48.613 04:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:48.613 04:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:48.613 04:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:48.613 04:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:48.613 04:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.613 04:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:48.613 04:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:48.613 00:15:48.613 real 0m4.873s 00:15:48.613 user 0m5.994s 00:15:48.613 sys 0m0.586s 00:15:48.613 04:37:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.613 04:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.613 ************************************ 00:15:48.613 END TEST raid_write_error_test 00:15:48.613 ************************************ 00:15:48.872 04:37:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:48.872 04:37:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:15:48.872 04:37:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:48.872 04:37:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.872 04:37:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.872 ************************************ 00:15:48.872 START TEST raid_state_function_test 00:15:48.872 ************************************ 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71502 00:15:48.872 Process raid pid: 71502 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71502' 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71502 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71502 ']' 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.872 04:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.872 [2024-11-27 04:37:36.400225] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:15:48.873 [2024-11-27 04:37:36.400436] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.131 [2024-11-27 04:37:36.589210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.131 [2024-11-27 04:37:36.721431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.389 [2024-11-27 04:37:36.928827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.389 [2024-11-27 04:37:36.928881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.956 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.956 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:49.956 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:49.956 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.956 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.956 [2024-11-27 04:37:37.443139] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.957 [2024-11-27 04:37:37.443214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.957 [2024-11-27 04:37:37.443232] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.957 [2024-11-27 04:37:37.443247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.957 [2024-11-27 04:37:37.443258] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:49.957 [2024-11-27 04:37:37.443273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.957 [2024-11-27 04:37:37.443283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:49.957 [2024-11-27 04:37:37.443297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.957 "name": "Existed_Raid", 00:15:49.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.957 "strip_size_kb": 64, 00:15:49.957 "state": "configuring", 00:15:49.957 "raid_level": "concat", 00:15:49.957 "superblock": false, 00:15:49.957 "num_base_bdevs": 4, 00:15:49.957 "num_base_bdevs_discovered": 0, 00:15:49.957 "num_base_bdevs_operational": 4, 00:15:49.957 "base_bdevs_list": [ 00:15:49.957 { 00:15:49.957 "name": "BaseBdev1", 00:15:49.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.957 "is_configured": false, 00:15:49.957 "data_offset": 0, 00:15:49.957 "data_size": 0 00:15:49.957 }, 00:15:49.957 { 00:15:49.957 "name": "BaseBdev2", 00:15:49.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.957 "is_configured": false, 00:15:49.957 "data_offset": 0, 00:15:49.957 "data_size": 0 00:15:49.957 }, 00:15:49.957 { 00:15:49.957 "name": "BaseBdev3", 00:15:49.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.957 "is_configured": false, 00:15:49.957 "data_offset": 0, 00:15:49.957 "data_size": 0 00:15:49.957 }, 00:15:49.957 { 00:15:49.957 "name": "BaseBdev4", 00:15:49.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.957 "is_configured": false, 00:15:49.957 "data_offset": 0, 00:15:49.957 "data_size": 0 00:15:49.957 } 00:15:49.957 ] 00:15:49.957 }' 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.957 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.522 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:50.522 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.522 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.522 [2024-11-27 04:37:37.959163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.522 [2024-11-27 04:37:37.959263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:50.522 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.522 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:50.522 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.522 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.522 [2024-11-27 04:37:37.967137] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.522 [2024-11-27 04:37:37.967187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.522 [2024-11-27 04:37:37.967203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.522 [2024-11-27 04:37:37.967218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.522 [2024-11-27 04:37:37.967228] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.522 [2024-11-27 04:37:37.967242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.522 [2024-11-27 04:37:37.967252] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:50.522 [2024-11-27 04:37:37.967266] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:50.522 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.522 04:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:50.522 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.523 04:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 [2024-11-27 04:37:38.013453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.523 BaseBdev1 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 [ 00:15:50.523 { 00:15:50.523 "name": "BaseBdev1", 00:15:50.523 "aliases": [ 00:15:50.523 "576be0b2-3c85-4568-aabd-487138c06a69" 00:15:50.523 ], 00:15:50.523 "product_name": "Malloc disk", 00:15:50.523 "block_size": 512, 00:15:50.523 "num_blocks": 65536, 00:15:50.523 "uuid": "576be0b2-3c85-4568-aabd-487138c06a69", 00:15:50.523 "assigned_rate_limits": { 00:15:50.523 "rw_ios_per_sec": 0, 00:15:50.523 "rw_mbytes_per_sec": 0, 00:15:50.523 "r_mbytes_per_sec": 0, 00:15:50.523 "w_mbytes_per_sec": 0 00:15:50.523 }, 00:15:50.523 "claimed": true, 00:15:50.523 "claim_type": "exclusive_write", 00:15:50.523 "zoned": false, 00:15:50.523 "supported_io_types": { 00:15:50.523 "read": true, 00:15:50.523 "write": true, 00:15:50.523 "unmap": true, 00:15:50.523 "flush": true, 00:15:50.523 "reset": true, 00:15:50.523 "nvme_admin": false, 00:15:50.523 "nvme_io": false, 00:15:50.523 "nvme_io_md": false, 00:15:50.523 "write_zeroes": true, 00:15:50.523 "zcopy": true, 00:15:50.523 "get_zone_info": false, 00:15:50.523 "zone_management": false, 00:15:50.523 "zone_append": false, 00:15:50.523 "compare": false, 00:15:50.523 "compare_and_write": false, 00:15:50.523 "abort": true, 00:15:50.523 "seek_hole": false, 00:15:50.523 "seek_data": false, 00:15:50.523 "copy": true, 00:15:50.523 "nvme_iov_md": false 00:15:50.523 }, 00:15:50.523 "memory_domains": [ 00:15:50.523 { 00:15:50.523 "dma_device_id": "system", 00:15:50.523 "dma_device_type": 1 00:15:50.523 }, 00:15:50.523 { 00:15:50.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.523 "dma_device_type": 2 00:15:50.523 } 00:15:50.523 ], 00:15:50.523 "driver_specific": {} 00:15:50.523 } 00:15:50.523 ] 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.523 "name": "Existed_Raid", 
00:15:50.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.523 "strip_size_kb": 64, 00:15:50.523 "state": "configuring", 00:15:50.523 "raid_level": "concat", 00:15:50.523 "superblock": false, 00:15:50.523 "num_base_bdevs": 4, 00:15:50.523 "num_base_bdevs_discovered": 1, 00:15:50.523 "num_base_bdevs_operational": 4, 00:15:50.523 "base_bdevs_list": [ 00:15:50.523 { 00:15:50.523 "name": "BaseBdev1", 00:15:50.523 "uuid": "576be0b2-3c85-4568-aabd-487138c06a69", 00:15:50.523 "is_configured": true, 00:15:50.523 "data_offset": 0, 00:15:50.523 "data_size": 65536 00:15:50.523 }, 00:15:50.523 { 00:15:50.523 "name": "BaseBdev2", 00:15:50.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.523 "is_configured": false, 00:15:50.523 "data_offset": 0, 00:15:50.523 "data_size": 0 00:15:50.523 }, 00:15:50.523 { 00:15:50.523 "name": "BaseBdev3", 00:15:50.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.523 "is_configured": false, 00:15:50.523 "data_offset": 0, 00:15:50.523 "data_size": 0 00:15:50.523 }, 00:15:50.523 { 00:15:50.523 "name": "BaseBdev4", 00:15:50.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.523 "is_configured": false, 00:15:50.523 "data_offset": 0, 00:15:50.523 "data_size": 0 00:15:50.523 } 00:15:50.523 ] 00:15:50.523 }' 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.523 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.094 [2024-11-27 04:37:38.573672] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.094 [2024-11-27 04:37:38.573745] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.094 [2024-11-27 04:37:38.581744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.094 [2024-11-27 04:37:38.584358] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.094 [2024-11-27 04:37:38.584435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.094 [2024-11-27 04:37:38.584464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:51.094 [2024-11-27 04:37:38.584494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.094 [2024-11-27 04:37:38.584513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:51.094 [2024-11-27 04:37:38.584537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.094 "name": "Existed_Raid", 00:15:51.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.094 "strip_size_kb": 64, 00:15:51.094 "state": "configuring", 00:15:51.094 "raid_level": "concat", 00:15:51.094 "superblock": false, 00:15:51.094 "num_base_bdevs": 4, 00:15:51.094 
"num_base_bdevs_discovered": 1, 00:15:51.094 "num_base_bdevs_operational": 4, 00:15:51.094 "base_bdevs_list": [ 00:15:51.094 { 00:15:51.094 "name": "BaseBdev1", 00:15:51.094 "uuid": "576be0b2-3c85-4568-aabd-487138c06a69", 00:15:51.094 "is_configured": true, 00:15:51.094 "data_offset": 0, 00:15:51.094 "data_size": 65536 00:15:51.094 }, 00:15:51.094 { 00:15:51.094 "name": "BaseBdev2", 00:15:51.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.094 "is_configured": false, 00:15:51.094 "data_offset": 0, 00:15:51.094 "data_size": 0 00:15:51.094 }, 00:15:51.094 { 00:15:51.094 "name": "BaseBdev3", 00:15:51.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.094 "is_configured": false, 00:15:51.094 "data_offset": 0, 00:15:51.094 "data_size": 0 00:15:51.094 }, 00:15:51.094 { 00:15:51.094 "name": "BaseBdev4", 00:15:51.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.094 "is_configured": false, 00:15:51.094 "data_offset": 0, 00:15:51.094 "data_size": 0 00:15:51.094 } 00:15:51.094 ] 00:15:51.094 }' 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.094 04:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.678 [2024-11-27 04:37:39.129004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.678 BaseBdev2 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:51.678 04:37:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.678 [ 00:15:51.678 { 00:15:51.678 "name": "BaseBdev2", 00:15:51.678 "aliases": [ 00:15:51.678 "c7621343-2b1e-4f9b-856e-fa53ffc4ad65" 00:15:51.678 ], 00:15:51.678 "product_name": "Malloc disk", 00:15:51.678 "block_size": 512, 00:15:51.678 "num_blocks": 65536, 00:15:51.678 "uuid": "c7621343-2b1e-4f9b-856e-fa53ffc4ad65", 00:15:51.678 "assigned_rate_limits": { 00:15:51.678 "rw_ios_per_sec": 0, 00:15:51.678 "rw_mbytes_per_sec": 0, 00:15:51.678 "r_mbytes_per_sec": 0, 00:15:51.678 "w_mbytes_per_sec": 0 00:15:51.678 }, 00:15:51.678 "claimed": true, 00:15:51.678 "claim_type": "exclusive_write", 00:15:51.678 "zoned": false, 00:15:51.678 "supported_io_types": { 
00:15:51.678 "read": true, 00:15:51.678 "write": true, 00:15:51.678 "unmap": true, 00:15:51.678 "flush": true, 00:15:51.678 "reset": true, 00:15:51.678 "nvme_admin": false, 00:15:51.678 "nvme_io": false, 00:15:51.678 "nvme_io_md": false, 00:15:51.678 "write_zeroes": true, 00:15:51.678 "zcopy": true, 00:15:51.678 "get_zone_info": false, 00:15:51.678 "zone_management": false, 00:15:51.678 "zone_append": false, 00:15:51.678 "compare": false, 00:15:51.678 "compare_and_write": false, 00:15:51.678 "abort": true, 00:15:51.678 "seek_hole": false, 00:15:51.678 "seek_data": false, 00:15:51.678 "copy": true, 00:15:51.678 "nvme_iov_md": false 00:15:51.678 }, 00:15:51.678 "memory_domains": [ 00:15:51.678 { 00:15:51.678 "dma_device_id": "system", 00:15:51.678 "dma_device_type": 1 00:15:51.678 }, 00:15:51.678 { 00:15:51.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.678 "dma_device_type": 2 00:15:51.678 } 00:15:51.678 ], 00:15:51.678 "driver_specific": {} 00:15:51.678 } 00:15:51.678 ] 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:51.678 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.679 "name": "Existed_Raid", 00:15:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.679 "strip_size_kb": 64, 00:15:51.679 "state": "configuring", 00:15:51.679 "raid_level": "concat", 00:15:51.679 "superblock": false, 00:15:51.679 "num_base_bdevs": 4, 00:15:51.679 "num_base_bdevs_discovered": 2, 00:15:51.679 "num_base_bdevs_operational": 4, 00:15:51.679 "base_bdevs_list": [ 00:15:51.679 { 00:15:51.679 "name": "BaseBdev1", 00:15:51.679 "uuid": "576be0b2-3c85-4568-aabd-487138c06a69", 00:15:51.679 "is_configured": true, 00:15:51.679 "data_offset": 0, 00:15:51.679 "data_size": 65536 00:15:51.679 }, 00:15:51.679 { 00:15:51.679 "name": "BaseBdev2", 00:15:51.679 "uuid": "c7621343-2b1e-4f9b-856e-fa53ffc4ad65", 00:15:51.679 
"is_configured": true, 00:15:51.679 "data_offset": 0, 00:15:51.679 "data_size": 65536 00:15:51.679 }, 00:15:51.679 { 00:15:51.679 "name": "BaseBdev3", 00:15:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.679 "is_configured": false, 00:15:51.679 "data_offset": 0, 00:15:51.679 "data_size": 0 00:15:51.679 }, 00:15:51.679 { 00:15:51.679 "name": "BaseBdev4", 00:15:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.679 "is_configured": false, 00:15:51.679 "data_offset": 0, 00:15:51.679 "data_size": 0 00:15:51.679 } 00:15:51.679 ] 00:15:51.679 }' 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.679 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.251 [2024-11-27 04:37:39.731554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:52.251 BaseBdev3 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.251 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.251 [ 00:15:52.251 { 00:15:52.251 "name": "BaseBdev3", 00:15:52.251 "aliases": [ 00:15:52.251 "dfabf0e6-3563-4eac-b141-dafbecc3c6f0" 00:15:52.251 ], 00:15:52.251 "product_name": "Malloc disk", 00:15:52.251 "block_size": 512, 00:15:52.251 "num_blocks": 65536, 00:15:52.251 "uuid": "dfabf0e6-3563-4eac-b141-dafbecc3c6f0", 00:15:52.251 "assigned_rate_limits": { 00:15:52.251 "rw_ios_per_sec": 0, 00:15:52.251 "rw_mbytes_per_sec": 0, 00:15:52.251 "r_mbytes_per_sec": 0, 00:15:52.251 "w_mbytes_per_sec": 0 00:15:52.251 }, 00:15:52.251 "claimed": true, 00:15:52.251 "claim_type": "exclusive_write", 00:15:52.251 "zoned": false, 00:15:52.251 "supported_io_types": { 00:15:52.251 "read": true, 00:15:52.251 "write": true, 00:15:52.251 "unmap": true, 00:15:52.251 "flush": true, 00:15:52.251 "reset": true, 00:15:52.252 "nvme_admin": false, 00:15:52.252 "nvme_io": false, 00:15:52.252 "nvme_io_md": false, 00:15:52.252 "write_zeroes": true, 00:15:52.252 "zcopy": true, 00:15:52.252 "get_zone_info": false, 00:15:52.252 "zone_management": false, 00:15:52.252 "zone_append": false, 00:15:52.252 "compare": false, 00:15:52.252 "compare_and_write": false, 
00:15:52.252 "abort": true, 00:15:52.252 "seek_hole": false, 00:15:52.252 "seek_data": false, 00:15:52.252 "copy": true, 00:15:52.252 "nvme_iov_md": false 00:15:52.252 }, 00:15:52.252 "memory_domains": [ 00:15:52.252 { 00:15:52.252 "dma_device_id": "system", 00:15:52.252 "dma_device_type": 1 00:15:52.252 }, 00:15:52.252 { 00:15:52.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.252 "dma_device_type": 2 00:15:52.252 } 00:15:52.252 ], 00:15:52.252 "driver_specific": {} 00:15:52.252 } 00:15:52.252 ] 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.252 "name": "Existed_Raid", 00:15:52.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.252 "strip_size_kb": 64, 00:15:52.252 "state": "configuring", 00:15:52.252 "raid_level": "concat", 00:15:52.252 "superblock": false, 00:15:52.252 "num_base_bdevs": 4, 00:15:52.252 "num_base_bdevs_discovered": 3, 00:15:52.252 "num_base_bdevs_operational": 4, 00:15:52.252 "base_bdevs_list": [ 00:15:52.252 { 00:15:52.252 "name": "BaseBdev1", 00:15:52.252 "uuid": "576be0b2-3c85-4568-aabd-487138c06a69", 00:15:52.252 "is_configured": true, 00:15:52.252 "data_offset": 0, 00:15:52.252 "data_size": 65536 00:15:52.252 }, 00:15:52.252 { 00:15:52.252 "name": "BaseBdev2", 00:15:52.252 "uuid": "c7621343-2b1e-4f9b-856e-fa53ffc4ad65", 00:15:52.252 "is_configured": true, 00:15:52.252 "data_offset": 0, 00:15:52.252 "data_size": 65536 00:15:52.252 }, 00:15:52.252 { 00:15:52.252 "name": "BaseBdev3", 00:15:52.252 "uuid": "dfabf0e6-3563-4eac-b141-dafbecc3c6f0", 00:15:52.252 "is_configured": true, 00:15:52.252 "data_offset": 0, 00:15:52.252 "data_size": 65536 00:15:52.252 }, 00:15:52.252 { 00:15:52.252 "name": "BaseBdev4", 00:15:52.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.252 "is_configured": false, 
00:15:52.252 "data_offset": 0, 00:15:52.252 "data_size": 0 00:15:52.252 } 00:15:52.252 ] 00:15:52.252 }' 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.252 04:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.868 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:52.868 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.868 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.868 [2024-11-27 04:37:40.393302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:52.868 [2024-11-27 04:37:40.393397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:52.868 [2024-11-27 04:37:40.393414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:52.868 [2024-11-27 04:37:40.393938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:52.868 [2024-11-27 04:37:40.394262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:52.868 [2024-11-27 04:37:40.394304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:52.868 [2024-11-27 04:37:40.394690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.868 BaseBdev4 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.869 [ 00:15:52.869 { 00:15:52.869 "name": "BaseBdev4", 00:15:52.869 "aliases": [ 00:15:52.869 "8c57996f-0324-4a4f-bca1-4ca89fd4ad10" 00:15:52.869 ], 00:15:52.869 "product_name": "Malloc disk", 00:15:52.869 "block_size": 512, 00:15:52.869 "num_blocks": 65536, 00:15:52.869 "uuid": "8c57996f-0324-4a4f-bca1-4ca89fd4ad10", 00:15:52.869 "assigned_rate_limits": { 00:15:52.869 "rw_ios_per_sec": 0, 00:15:52.869 "rw_mbytes_per_sec": 0, 00:15:52.869 "r_mbytes_per_sec": 0, 00:15:52.869 "w_mbytes_per_sec": 0 00:15:52.869 }, 00:15:52.869 "claimed": true, 00:15:52.869 "claim_type": "exclusive_write", 00:15:52.869 "zoned": false, 00:15:52.869 "supported_io_types": { 00:15:52.869 "read": true, 00:15:52.869 "write": true, 00:15:52.869 "unmap": true, 00:15:52.869 "flush": true, 00:15:52.869 "reset": true, 00:15:52.869 
"nvme_admin": false, 00:15:52.869 "nvme_io": false, 00:15:52.869 "nvme_io_md": false, 00:15:52.869 "write_zeroes": true, 00:15:52.869 "zcopy": true, 00:15:52.869 "get_zone_info": false, 00:15:52.869 "zone_management": false, 00:15:52.869 "zone_append": false, 00:15:52.869 "compare": false, 00:15:52.869 "compare_and_write": false, 00:15:52.869 "abort": true, 00:15:52.869 "seek_hole": false, 00:15:52.869 "seek_data": false, 00:15:52.869 "copy": true, 00:15:52.869 "nvme_iov_md": false 00:15:52.869 }, 00:15:52.869 "memory_domains": [ 00:15:52.869 { 00:15:52.869 "dma_device_id": "system", 00:15:52.869 "dma_device_type": 1 00:15:52.869 }, 00:15:52.869 { 00:15:52.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.869 "dma_device_type": 2 00:15:52.869 } 00:15:52.869 ], 00:15:52.869 "driver_specific": {} 00:15:52.869 } 00:15:52.869 ] 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.869 
04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.869 "name": "Existed_Raid", 00:15:52.869 "uuid": "ebae97af-c97f-4646-a023-d1fc752b3077", 00:15:52.869 "strip_size_kb": 64, 00:15:52.869 "state": "online", 00:15:52.869 "raid_level": "concat", 00:15:52.869 "superblock": false, 00:15:52.869 "num_base_bdevs": 4, 00:15:52.869 "num_base_bdevs_discovered": 4, 00:15:52.869 "num_base_bdevs_operational": 4, 00:15:52.869 "base_bdevs_list": [ 00:15:52.869 { 00:15:52.869 "name": "BaseBdev1", 00:15:52.869 "uuid": "576be0b2-3c85-4568-aabd-487138c06a69", 00:15:52.869 "is_configured": true, 00:15:52.869 "data_offset": 0, 00:15:52.869 "data_size": 65536 00:15:52.869 }, 00:15:52.869 { 00:15:52.869 "name": "BaseBdev2", 00:15:52.869 "uuid": "c7621343-2b1e-4f9b-856e-fa53ffc4ad65", 00:15:52.869 "is_configured": true, 00:15:52.869 "data_offset": 0, 00:15:52.869 "data_size": 65536 00:15:52.869 }, 00:15:52.869 { 00:15:52.869 "name": "BaseBdev3", 
00:15:52.869 "uuid": "dfabf0e6-3563-4eac-b141-dafbecc3c6f0", 00:15:52.869 "is_configured": true, 00:15:52.869 "data_offset": 0, 00:15:52.869 "data_size": 65536 00:15:52.869 }, 00:15:52.869 { 00:15:52.869 "name": "BaseBdev4", 00:15:52.869 "uuid": "8c57996f-0324-4a4f-bca1-4ca89fd4ad10", 00:15:52.869 "is_configured": true, 00:15:52.869 "data_offset": 0, 00:15:52.869 "data_size": 65536 00:15:52.869 } 00:15:52.869 ] 00:15:52.869 }' 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.869 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.438 [2024-11-27 04:37:40.945962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.438 04:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.438 
04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:53.438 "name": "Existed_Raid", 00:15:53.438 "aliases": [ 00:15:53.438 "ebae97af-c97f-4646-a023-d1fc752b3077" 00:15:53.438 ], 00:15:53.438 "product_name": "Raid Volume", 00:15:53.438 "block_size": 512, 00:15:53.438 "num_blocks": 262144, 00:15:53.438 "uuid": "ebae97af-c97f-4646-a023-d1fc752b3077", 00:15:53.438 "assigned_rate_limits": { 00:15:53.438 "rw_ios_per_sec": 0, 00:15:53.438 "rw_mbytes_per_sec": 0, 00:15:53.438 "r_mbytes_per_sec": 0, 00:15:53.438 "w_mbytes_per_sec": 0 00:15:53.438 }, 00:15:53.438 "claimed": false, 00:15:53.438 "zoned": false, 00:15:53.438 "supported_io_types": { 00:15:53.438 "read": true, 00:15:53.438 "write": true, 00:15:53.438 "unmap": true, 00:15:53.438 "flush": true, 00:15:53.438 "reset": true, 00:15:53.438 "nvme_admin": false, 00:15:53.438 "nvme_io": false, 00:15:53.438 "nvme_io_md": false, 00:15:53.438 "write_zeroes": true, 00:15:53.438 "zcopy": false, 00:15:53.438 "get_zone_info": false, 00:15:53.438 "zone_management": false, 00:15:53.438 "zone_append": false, 00:15:53.438 "compare": false, 00:15:53.438 "compare_and_write": false, 00:15:53.438 "abort": false, 00:15:53.438 "seek_hole": false, 00:15:53.438 "seek_data": false, 00:15:53.438 "copy": false, 00:15:53.438 "nvme_iov_md": false 00:15:53.438 }, 00:15:53.438 "memory_domains": [ 00:15:53.438 { 00:15:53.438 "dma_device_id": "system", 00:15:53.438 "dma_device_type": 1 00:15:53.438 }, 00:15:53.438 { 00:15:53.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.438 "dma_device_type": 2 00:15:53.438 }, 00:15:53.438 { 00:15:53.438 "dma_device_id": "system", 00:15:53.439 "dma_device_type": 1 00:15:53.439 }, 00:15:53.439 { 00:15:53.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.439 "dma_device_type": 2 00:15:53.439 }, 00:15:53.439 { 00:15:53.439 "dma_device_id": "system", 00:15:53.439 "dma_device_type": 1 00:15:53.439 }, 00:15:53.439 { 00:15:53.439 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:53.439 "dma_device_type": 2 00:15:53.439 }, 00:15:53.439 { 00:15:53.439 "dma_device_id": "system", 00:15:53.439 "dma_device_type": 1 00:15:53.439 }, 00:15:53.439 { 00:15:53.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.439 "dma_device_type": 2 00:15:53.439 } 00:15:53.439 ], 00:15:53.439 "driver_specific": { 00:15:53.439 "raid": { 00:15:53.439 "uuid": "ebae97af-c97f-4646-a023-d1fc752b3077", 00:15:53.439 "strip_size_kb": 64, 00:15:53.439 "state": "online", 00:15:53.439 "raid_level": "concat", 00:15:53.439 "superblock": false, 00:15:53.439 "num_base_bdevs": 4, 00:15:53.439 "num_base_bdevs_discovered": 4, 00:15:53.439 "num_base_bdevs_operational": 4, 00:15:53.439 "base_bdevs_list": [ 00:15:53.439 { 00:15:53.439 "name": "BaseBdev1", 00:15:53.439 "uuid": "576be0b2-3c85-4568-aabd-487138c06a69", 00:15:53.439 "is_configured": true, 00:15:53.439 "data_offset": 0, 00:15:53.439 "data_size": 65536 00:15:53.439 }, 00:15:53.439 { 00:15:53.439 "name": "BaseBdev2", 00:15:53.439 "uuid": "c7621343-2b1e-4f9b-856e-fa53ffc4ad65", 00:15:53.439 "is_configured": true, 00:15:53.439 "data_offset": 0, 00:15:53.439 "data_size": 65536 00:15:53.439 }, 00:15:53.439 { 00:15:53.439 "name": "BaseBdev3", 00:15:53.439 "uuid": "dfabf0e6-3563-4eac-b141-dafbecc3c6f0", 00:15:53.439 "is_configured": true, 00:15:53.439 "data_offset": 0, 00:15:53.439 "data_size": 65536 00:15:53.439 }, 00:15:53.439 { 00:15:53.439 "name": "BaseBdev4", 00:15:53.439 "uuid": "8c57996f-0324-4a4f-bca1-4ca89fd4ad10", 00:15:53.439 "is_configured": true, 00:15:53.439 "data_offset": 0, 00:15:53.439 "data_size": 65536 00:15:53.439 } 00:15:53.439 ] 00:15:53.439 } 00:15:53.439 } 00:15:53.439 }' 00:15:53.439 04:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:53.439 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:53.439 BaseBdev2 
00:15:53.439 BaseBdev3 00:15:53.439 BaseBdev4' 00:15:53.439 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.697 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:53.697 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.697 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:53.697 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.697 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.697 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.697 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.698 04:37:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.698 04:37:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.698 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.698 [2024-11-27 04:37:41.293659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:53.698 [2024-11-27 04:37:41.293704] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.698 [2024-11-27 04:37:41.293787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.957 "name": "Existed_Raid", 00:15:53.957 "uuid": "ebae97af-c97f-4646-a023-d1fc752b3077", 00:15:53.957 "strip_size_kb": 64, 00:15:53.957 "state": "offline", 00:15:53.957 "raid_level": "concat", 00:15:53.957 "superblock": false, 00:15:53.957 "num_base_bdevs": 4, 00:15:53.957 "num_base_bdevs_discovered": 3, 00:15:53.957 "num_base_bdevs_operational": 3, 00:15:53.957 "base_bdevs_list": [ 00:15:53.957 { 00:15:53.957 "name": null, 00:15:53.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.957 "is_configured": false, 00:15:53.957 "data_offset": 0, 00:15:53.957 "data_size": 65536 00:15:53.957 }, 00:15:53.957 { 00:15:53.957 "name": "BaseBdev2", 00:15:53.957 "uuid": "c7621343-2b1e-4f9b-856e-fa53ffc4ad65", 00:15:53.957 "is_configured": 
true, 00:15:53.957 "data_offset": 0, 00:15:53.957 "data_size": 65536 00:15:53.957 }, 00:15:53.957 { 00:15:53.957 "name": "BaseBdev3", 00:15:53.957 "uuid": "dfabf0e6-3563-4eac-b141-dafbecc3c6f0", 00:15:53.957 "is_configured": true, 00:15:53.957 "data_offset": 0, 00:15:53.957 "data_size": 65536 00:15:53.957 }, 00:15:53.957 { 00:15:53.957 "name": "BaseBdev4", 00:15:53.957 "uuid": "8c57996f-0324-4a4f-bca1-4ca89fd4ad10", 00:15:53.957 "is_configured": true, 00:15:53.957 "data_offset": 0, 00:15:53.957 "data_size": 65536 00:15:53.957 } 00:15:53.957 ] 00:15:53.957 }' 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.957 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:54.524 04:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.524 [2024-11-27 04:37:41.966636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.524 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.524 [2024-11-27 04:37:42.114877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:54.782 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:54.783 04:37:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.783 [2024-11-27 04:37:42.266138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:54.783 [2024-11-27 04:37:42.266209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.783 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.042 BaseBdev2 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.042 [ 00:15:55.042 { 00:15:55.042 "name": "BaseBdev2", 00:15:55.042 "aliases": [ 00:15:55.042 "a08d3409-2b58-4820-b2b1-65fc86683c7b" 00:15:55.042 ], 00:15:55.042 "product_name": "Malloc disk", 00:15:55.042 "block_size": 512, 00:15:55.042 "num_blocks": 65536, 00:15:55.042 "uuid": "a08d3409-2b58-4820-b2b1-65fc86683c7b", 00:15:55.042 "assigned_rate_limits": { 00:15:55.042 "rw_ios_per_sec": 0, 00:15:55.042 "rw_mbytes_per_sec": 0, 00:15:55.042 "r_mbytes_per_sec": 0, 00:15:55.042 "w_mbytes_per_sec": 0 00:15:55.042 }, 00:15:55.042 "claimed": false, 00:15:55.042 "zoned": false, 00:15:55.042 "supported_io_types": { 00:15:55.042 "read": true, 00:15:55.042 "write": true, 00:15:55.042 "unmap": true, 00:15:55.042 "flush": true, 00:15:55.042 "reset": true, 00:15:55.042 "nvme_admin": false, 00:15:55.042 "nvme_io": false, 00:15:55.042 "nvme_io_md": false, 00:15:55.042 "write_zeroes": true, 00:15:55.042 "zcopy": true, 00:15:55.042 "get_zone_info": false, 00:15:55.042 "zone_management": false, 00:15:55.042 "zone_append": false, 00:15:55.042 "compare": false, 00:15:55.042 "compare_and_write": false, 00:15:55.042 "abort": true, 00:15:55.042 "seek_hole": false, 00:15:55.042 
"seek_data": false, 00:15:55.042 "copy": true, 00:15:55.042 "nvme_iov_md": false 00:15:55.042 }, 00:15:55.042 "memory_domains": [ 00:15:55.042 { 00:15:55.042 "dma_device_id": "system", 00:15:55.042 "dma_device_type": 1 00:15:55.042 }, 00:15:55.042 { 00:15:55.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.042 "dma_device_type": 2 00:15:55.042 } 00:15:55.042 ], 00:15:55.042 "driver_specific": {} 00:15:55.042 } 00:15:55.042 ] 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.042 BaseBdev3 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.042 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.042 [ 00:15:55.042 { 00:15:55.042 "name": "BaseBdev3", 00:15:55.042 "aliases": [ 00:15:55.042 "96971dec-253e-41ac-a898-b1dc9b9f6b52" 00:15:55.042 ], 00:15:55.042 "product_name": "Malloc disk", 00:15:55.042 "block_size": 512, 00:15:55.042 "num_blocks": 65536, 00:15:55.042 "uuid": "96971dec-253e-41ac-a898-b1dc9b9f6b52", 00:15:55.042 "assigned_rate_limits": { 00:15:55.042 "rw_ios_per_sec": 0, 00:15:55.042 "rw_mbytes_per_sec": 0, 00:15:55.042 "r_mbytes_per_sec": 0, 00:15:55.042 "w_mbytes_per_sec": 0 00:15:55.042 }, 00:15:55.042 "claimed": false, 00:15:55.042 "zoned": false, 00:15:55.042 "supported_io_types": { 00:15:55.042 "read": true, 00:15:55.042 "write": true, 00:15:55.042 "unmap": true, 00:15:55.042 "flush": true, 00:15:55.042 "reset": true, 00:15:55.042 "nvme_admin": false, 00:15:55.042 "nvme_io": false, 00:15:55.042 "nvme_io_md": false, 00:15:55.042 "write_zeroes": true, 00:15:55.042 "zcopy": true, 00:15:55.042 "get_zone_info": false, 00:15:55.042 "zone_management": false, 00:15:55.042 "zone_append": false, 00:15:55.043 "compare": false, 00:15:55.043 "compare_and_write": false, 00:15:55.043 "abort": true, 00:15:55.043 "seek_hole": false, 00:15:55.043 "seek_data": false, 
00:15:55.043 "copy": true, 00:15:55.043 "nvme_iov_md": false 00:15:55.043 }, 00:15:55.043 "memory_domains": [ 00:15:55.043 { 00:15:55.043 "dma_device_id": "system", 00:15:55.043 "dma_device_type": 1 00:15:55.043 }, 00:15:55.043 { 00:15:55.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.043 "dma_device_type": 2 00:15:55.043 } 00:15:55.043 ], 00:15:55.043 "driver_specific": {} 00:15:55.043 } 00:15:55.043 ] 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.043 BaseBdev4 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.043 
04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.043 [ 00:15:55.043 { 00:15:55.043 "name": "BaseBdev4", 00:15:55.043 "aliases": [ 00:15:55.043 "ce5b3857-4f51-4ba6-93de-daefaf55fb58" 00:15:55.043 ], 00:15:55.043 "product_name": "Malloc disk", 00:15:55.043 "block_size": 512, 00:15:55.043 "num_blocks": 65536, 00:15:55.043 "uuid": "ce5b3857-4f51-4ba6-93de-daefaf55fb58", 00:15:55.043 "assigned_rate_limits": { 00:15:55.043 "rw_ios_per_sec": 0, 00:15:55.043 "rw_mbytes_per_sec": 0, 00:15:55.043 "r_mbytes_per_sec": 0, 00:15:55.043 "w_mbytes_per_sec": 0 00:15:55.043 }, 00:15:55.043 "claimed": false, 00:15:55.043 "zoned": false, 00:15:55.043 "supported_io_types": { 00:15:55.043 "read": true, 00:15:55.043 "write": true, 00:15:55.043 "unmap": true, 00:15:55.043 "flush": true, 00:15:55.043 "reset": true, 00:15:55.043 "nvme_admin": false, 00:15:55.043 "nvme_io": false, 00:15:55.043 "nvme_io_md": false, 00:15:55.043 "write_zeroes": true, 00:15:55.043 "zcopy": true, 00:15:55.043 "get_zone_info": false, 00:15:55.043 "zone_management": false, 00:15:55.043 "zone_append": false, 00:15:55.043 "compare": false, 00:15:55.043 "compare_and_write": false, 00:15:55.043 "abort": true, 00:15:55.043 "seek_hole": false, 00:15:55.043 "seek_data": false, 00:15:55.043 
"copy": true, 00:15:55.043 "nvme_iov_md": false 00:15:55.043 }, 00:15:55.043 "memory_domains": [ 00:15:55.043 { 00:15:55.043 "dma_device_id": "system", 00:15:55.043 "dma_device_type": 1 00:15:55.043 }, 00:15:55.043 { 00:15:55.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.043 "dma_device_type": 2 00:15:55.043 } 00:15:55.043 ], 00:15:55.043 "driver_specific": {} 00:15:55.043 } 00:15:55.043 ] 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.043 [2024-11-27 04:37:42.627633] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.043 [2024-11-27 04:37:42.627685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.043 [2024-11-27 04:37:42.627718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.043 [2024-11-27 04:37:42.630152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.043 [2024-11-27 04:37:42.630239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.043 04:37:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.043 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.301 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.301 "name": "Existed_Raid", 00:15:55.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.301 "strip_size_kb": 64, 00:15:55.301 "state": "configuring", 00:15:55.301 
"raid_level": "concat", 00:15:55.301 "superblock": false, 00:15:55.302 "num_base_bdevs": 4, 00:15:55.302 "num_base_bdevs_discovered": 3, 00:15:55.302 "num_base_bdevs_operational": 4, 00:15:55.302 "base_bdevs_list": [ 00:15:55.302 { 00:15:55.302 "name": "BaseBdev1", 00:15:55.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.302 "is_configured": false, 00:15:55.302 "data_offset": 0, 00:15:55.302 "data_size": 0 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "name": "BaseBdev2", 00:15:55.302 "uuid": "a08d3409-2b58-4820-b2b1-65fc86683c7b", 00:15:55.302 "is_configured": true, 00:15:55.302 "data_offset": 0, 00:15:55.302 "data_size": 65536 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "name": "BaseBdev3", 00:15:55.302 "uuid": "96971dec-253e-41ac-a898-b1dc9b9f6b52", 00:15:55.302 "is_configured": true, 00:15:55.302 "data_offset": 0, 00:15:55.302 "data_size": 65536 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "name": "BaseBdev4", 00:15:55.302 "uuid": "ce5b3857-4f51-4ba6-93de-daefaf55fb58", 00:15:55.302 "is_configured": true, 00:15:55.302 "data_offset": 0, 00:15:55.302 "data_size": 65536 00:15:55.302 } 00:15:55.302 ] 00:15:55.302 }' 00:15:55.302 04:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.302 04:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.561 [2024-11-27 04:37:43.139800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.561 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.820 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.820 "name": "Existed_Raid", 00:15:55.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.820 "strip_size_kb": 64, 00:15:55.820 "state": "configuring", 00:15:55.820 "raid_level": "concat", 00:15:55.820 "superblock": false, 
00:15:55.820 "num_base_bdevs": 4, 00:15:55.820 "num_base_bdevs_discovered": 2, 00:15:55.820 "num_base_bdevs_operational": 4, 00:15:55.820 "base_bdevs_list": [ 00:15:55.820 { 00:15:55.820 "name": "BaseBdev1", 00:15:55.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.820 "is_configured": false, 00:15:55.820 "data_offset": 0, 00:15:55.820 "data_size": 0 00:15:55.820 }, 00:15:55.820 { 00:15:55.820 "name": null, 00:15:55.820 "uuid": "a08d3409-2b58-4820-b2b1-65fc86683c7b", 00:15:55.820 "is_configured": false, 00:15:55.820 "data_offset": 0, 00:15:55.820 "data_size": 65536 00:15:55.820 }, 00:15:55.820 { 00:15:55.820 "name": "BaseBdev3", 00:15:55.820 "uuid": "96971dec-253e-41ac-a898-b1dc9b9f6b52", 00:15:55.820 "is_configured": true, 00:15:55.820 "data_offset": 0, 00:15:55.820 "data_size": 65536 00:15:55.820 }, 00:15:55.820 { 00:15:55.820 "name": "BaseBdev4", 00:15:55.820 "uuid": "ce5b3857-4f51-4ba6-93de-daefaf55fb58", 00:15:55.820 "is_configured": true, 00:15:55.820 "data_offset": 0, 00:15:55.820 "data_size": 65536 00:15:55.820 } 00:15:55.820 ] 00:15:55.820 }' 00:15:55.820 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.820 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.080 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.080 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:56.080 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.080 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.080 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:56.338 04:37:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.338 [2024-11-27 04:37:43.745757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.338 BaseBdev1 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.338 04:37:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.338 [ 00:15:56.338 { 00:15:56.338 "name": "BaseBdev1", 00:15:56.338 "aliases": [ 00:15:56.338 "d93c3fef-9a21-48fc-bc10-23a6839a28ed" 00:15:56.338 ], 00:15:56.338 "product_name": "Malloc disk", 00:15:56.338 "block_size": 512, 00:15:56.338 "num_blocks": 65536, 00:15:56.338 "uuid": "d93c3fef-9a21-48fc-bc10-23a6839a28ed", 00:15:56.338 "assigned_rate_limits": { 00:15:56.338 "rw_ios_per_sec": 0, 00:15:56.338 "rw_mbytes_per_sec": 0, 00:15:56.338 "r_mbytes_per_sec": 0, 00:15:56.338 "w_mbytes_per_sec": 0 00:15:56.338 }, 00:15:56.338 "claimed": true, 00:15:56.338 "claim_type": "exclusive_write", 00:15:56.338 "zoned": false, 00:15:56.338 "supported_io_types": { 00:15:56.338 "read": true, 00:15:56.338 "write": true, 00:15:56.338 "unmap": true, 00:15:56.338 "flush": true, 00:15:56.338 "reset": true, 00:15:56.338 "nvme_admin": false, 00:15:56.338 "nvme_io": false, 00:15:56.338 "nvme_io_md": false, 00:15:56.338 "write_zeroes": true, 00:15:56.338 "zcopy": true, 00:15:56.338 "get_zone_info": false, 00:15:56.338 "zone_management": false, 00:15:56.338 "zone_append": false, 00:15:56.338 "compare": false, 00:15:56.338 "compare_and_write": false, 00:15:56.338 "abort": true, 00:15:56.338 "seek_hole": false, 00:15:56.338 "seek_data": false, 00:15:56.338 "copy": true, 00:15:56.338 "nvme_iov_md": false 00:15:56.338 }, 00:15:56.338 "memory_domains": [ 00:15:56.338 { 00:15:56.338 "dma_device_id": "system", 00:15:56.338 "dma_device_type": 1 00:15:56.338 }, 00:15:56.338 { 00:15:56.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.338 "dma_device_type": 2 00:15:56.338 } 00:15:56.339 ], 00:15:56.339 "driver_specific": {} 00:15:56.339 } 00:15:56.339 ] 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.339 "name": "Existed_Raid", 00:15:56.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.339 "strip_size_kb": 64, 00:15:56.339 "state": "configuring", 00:15:56.339 "raid_level": "concat", 00:15:56.339 "superblock": false, 
00:15:56.339 "num_base_bdevs": 4, 00:15:56.339 "num_base_bdevs_discovered": 3, 00:15:56.339 "num_base_bdevs_operational": 4, 00:15:56.339 "base_bdevs_list": [ 00:15:56.339 { 00:15:56.339 "name": "BaseBdev1", 00:15:56.339 "uuid": "d93c3fef-9a21-48fc-bc10-23a6839a28ed", 00:15:56.339 "is_configured": true, 00:15:56.339 "data_offset": 0, 00:15:56.339 "data_size": 65536 00:15:56.339 }, 00:15:56.339 { 00:15:56.339 "name": null, 00:15:56.339 "uuid": "a08d3409-2b58-4820-b2b1-65fc86683c7b", 00:15:56.339 "is_configured": false, 00:15:56.339 "data_offset": 0, 00:15:56.339 "data_size": 65536 00:15:56.339 }, 00:15:56.339 { 00:15:56.339 "name": "BaseBdev3", 00:15:56.339 "uuid": "96971dec-253e-41ac-a898-b1dc9b9f6b52", 00:15:56.339 "is_configured": true, 00:15:56.339 "data_offset": 0, 00:15:56.339 "data_size": 65536 00:15:56.339 }, 00:15:56.339 { 00:15:56.339 "name": "BaseBdev4", 00:15:56.339 "uuid": "ce5b3857-4f51-4ba6-93de-daefaf55fb58", 00:15:56.339 "is_configured": true, 00:15:56.339 "data_offset": 0, 00:15:56.339 "data_size": 65536 00:15:56.339 } 00:15:56.339 ] 00:15:56.339 }' 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.339 04:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:56.905 04:37:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.905 [2024-11-27 04:37:44.366056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.905 "name": "Existed_Raid", 00:15:56.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.905 "strip_size_kb": 64, 00:15:56.905 "state": "configuring", 00:15:56.905 "raid_level": "concat", 00:15:56.905 "superblock": false, 00:15:56.905 "num_base_bdevs": 4, 00:15:56.905 "num_base_bdevs_discovered": 2, 00:15:56.905 "num_base_bdevs_operational": 4, 00:15:56.905 "base_bdevs_list": [ 00:15:56.905 { 00:15:56.905 "name": "BaseBdev1", 00:15:56.905 "uuid": "d93c3fef-9a21-48fc-bc10-23a6839a28ed", 00:15:56.905 "is_configured": true, 00:15:56.905 "data_offset": 0, 00:15:56.905 "data_size": 65536 00:15:56.905 }, 00:15:56.905 { 00:15:56.905 "name": null, 00:15:56.905 "uuid": "a08d3409-2b58-4820-b2b1-65fc86683c7b", 00:15:56.905 "is_configured": false, 00:15:56.905 "data_offset": 0, 00:15:56.905 "data_size": 65536 00:15:56.905 }, 00:15:56.905 { 00:15:56.905 "name": null, 00:15:56.905 "uuid": "96971dec-253e-41ac-a898-b1dc9b9f6b52", 00:15:56.905 "is_configured": false, 00:15:56.905 "data_offset": 0, 00:15:56.905 "data_size": 65536 00:15:56.905 }, 00:15:56.905 { 00:15:56.905 "name": "BaseBdev4", 00:15:56.905 "uuid": "ce5b3857-4f51-4ba6-93de-daefaf55fb58", 00:15:56.905 "is_configured": true, 00:15:56.905 "data_offset": 0, 00:15:56.905 "data_size": 65536 00:15:56.905 } 00:15:56.905 ] 00:15:56.905 }' 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.905 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.473 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:15:57.473 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.473 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.474 [2024-11-27 04:37:44.942207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.474 04:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.474 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.474 "name": "Existed_Raid", 00:15:57.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.474 "strip_size_kb": 64, 00:15:57.474 "state": "configuring", 00:15:57.474 "raid_level": "concat", 00:15:57.474 "superblock": false, 00:15:57.474 "num_base_bdevs": 4, 00:15:57.474 "num_base_bdevs_discovered": 3, 00:15:57.474 "num_base_bdevs_operational": 4, 00:15:57.474 "base_bdevs_list": [ 00:15:57.474 { 00:15:57.474 "name": "BaseBdev1", 00:15:57.474 "uuid": "d93c3fef-9a21-48fc-bc10-23a6839a28ed", 00:15:57.474 "is_configured": true, 00:15:57.474 "data_offset": 0, 00:15:57.474 "data_size": 65536 00:15:57.474 }, 00:15:57.474 { 00:15:57.474 "name": null, 00:15:57.474 "uuid": "a08d3409-2b58-4820-b2b1-65fc86683c7b", 00:15:57.474 "is_configured": false, 00:15:57.474 "data_offset": 0, 00:15:57.474 "data_size": 65536 00:15:57.474 }, 00:15:57.474 { 00:15:57.474 "name": "BaseBdev3", 00:15:57.474 "uuid": "96971dec-253e-41ac-a898-b1dc9b9f6b52", 00:15:57.474 "is_configured": 
true, 00:15:57.474 "data_offset": 0, 00:15:57.474 "data_size": 65536 00:15:57.474 }, 00:15:57.474 { 00:15:57.474 "name": "BaseBdev4", 00:15:57.474 "uuid": "ce5b3857-4f51-4ba6-93de-daefaf55fb58", 00:15:57.474 "is_configured": true, 00:15:57.474 "data_offset": 0, 00:15:57.474 "data_size": 65536 00:15:57.474 } 00:15:57.474 ] 00:15:57.474 }' 00:15:57.474 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.474 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.043 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.043 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:58.043 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.043 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.044 [2024-11-27 04:37:45.534363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.044 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.303 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.303 "name": "Existed_Raid", 00:15:58.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.303 "strip_size_kb": 64, 00:15:58.303 "state": "configuring", 00:15:58.303 "raid_level": "concat", 00:15:58.303 "superblock": false, 00:15:58.303 "num_base_bdevs": 4, 00:15:58.303 "num_base_bdevs_discovered": 2, 00:15:58.303 "num_base_bdevs_operational": 4, 00:15:58.303 
"base_bdevs_list": [ 00:15:58.303 { 00:15:58.303 "name": null, 00:15:58.303 "uuid": "d93c3fef-9a21-48fc-bc10-23a6839a28ed", 00:15:58.303 "is_configured": false, 00:15:58.303 "data_offset": 0, 00:15:58.303 "data_size": 65536 00:15:58.303 }, 00:15:58.303 { 00:15:58.303 "name": null, 00:15:58.303 "uuid": "a08d3409-2b58-4820-b2b1-65fc86683c7b", 00:15:58.303 "is_configured": false, 00:15:58.303 "data_offset": 0, 00:15:58.303 "data_size": 65536 00:15:58.303 }, 00:15:58.303 { 00:15:58.303 "name": "BaseBdev3", 00:15:58.303 "uuid": "96971dec-253e-41ac-a898-b1dc9b9f6b52", 00:15:58.303 "is_configured": true, 00:15:58.303 "data_offset": 0, 00:15:58.303 "data_size": 65536 00:15:58.303 }, 00:15:58.303 { 00:15:58.303 "name": "BaseBdev4", 00:15:58.303 "uuid": "ce5b3857-4f51-4ba6-93de-daefaf55fb58", 00:15:58.303 "is_configured": true, 00:15:58.303 "data_offset": 0, 00:15:58.303 "data_size": 65536 00:15:58.303 } 00:15:58.303 ] 00:15:58.303 }' 00:15:58.303 04:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.303 04:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.561 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.561 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.561 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.561 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:58.561 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:58.819 04:37:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.819 [2024-11-27 04:37:46.206934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.819 04:37:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.819 "name": "Existed_Raid", 00:15:58.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.819 "strip_size_kb": 64, 00:15:58.819 "state": "configuring", 00:15:58.819 "raid_level": "concat", 00:15:58.819 "superblock": false, 00:15:58.819 "num_base_bdevs": 4, 00:15:58.819 "num_base_bdevs_discovered": 3, 00:15:58.819 "num_base_bdevs_operational": 4, 00:15:58.819 "base_bdevs_list": [ 00:15:58.819 { 00:15:58.819 "name": null, 00:15:58.819 "uuid": "d93c3fef-9a21-48fc-bc10-23a6839a28ed", 00:15:58.819 "is_configured": false, 00:15:58.819 "data_offset": 0, 00:15:58.819 "data_size": 65536 00:15:58.819 }, 00:15:58.819 { 00:15:58.819 "name": "BaseBdev2", 00:15:58.819 "uuid": "a08d3409-2b58-4820-b2b1-65fc86683c7b", 00:15:58.819 "is_configured": true, 00:15:58.819 "data_offset": 0, 00:15:58.819 "data_size": 65536 00:15:58.819 }, 00:15:58.819 { 00:15:58.819 "name": "BaseBdev3", 00:15:58.819 "uuid": "96971dec-253e-41ac-a898-b1dc9b9f6b52", 00:15:58.819 "is_configured": true, 00:15:58.819 "data_offset": 0, 00:15:58.819 "data_size": 65536 00:15:58.819 }, 00:15:58.819 { 00:15:58.819 "name": "BaseBdev4", 00:15:58.819 "uuid": "ce5b3857-4f51-4ba6-93de-daefaf55fb58", 00:15:58.819 "is_configured": true, 00:15:58.819 "data_offset": 0, 00:15:58.819 "data_size": 65536 00:15:58.819 } 00:15:58.819 ] 00:15:58.819 }' 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.819 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d93c3fef-9a21-48fc-bc10-23a6839a28ed 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.387 [2024-11-27 04:37:46.889991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:59.387 [2024-11-27 04:37:46.890064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:59.387 [2024-11-27 04:37:46.890076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:59.387 [2024-11-27 04:37:46.890417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:59.387 [2024-11-27 04:37:46.890596] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:59.387 [2024-11-27 04:37:46.890616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:59.387 [2024-11-27 04:37:46.890928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.387 NewBaseBdev 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:59.387 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.388 [ 00:15:59.388 { 
00:15:59.388 "name": "NewBaseBdev", 00:15:59.388 "aliases": [ 00:15:59.388 "d93c3fef-9a21-48fc-bc10-23a6839a28ed" 00:15:59.388 ], 00:15:59.388 "product_name": "Malloc disk", 00:15:59.388 "block_size": 512, 00:15:59.388 "num_blocks": 65536, 00:15:59.388 "uuid": "d93c3fef-9a21-48fc-bc10-23a6839a28ed", 00:15:59.388 "assigned_rate_limits": { 00:15:59.388 "rw_ios_per_sec": 0, 00:15:59.388 "rw_mbytes_per_sec": 0, 00:15:59.388 "r_mbytes_per_sec": 0, 00:15:59.388 "w_mbytes_per_sec": 0 00:15:59.388 }, 00:15:59.388 "claimed": true, 00:15:59.388 "claim_type": "exclusive_write", 00:15:59.388 "zoned": false, 00:15:59.388 "supported_io_types": { 00:15:59.388 "read": true, 00:15:59.388 "write": true, 00:15:59.388 "unmap": true, 00:15:59.388 "flush": true, 00:15:59.388 "reset": true, 00:15:59.388 "nvme_admin": false, 00:15:59.388 "nvme_io": false, 00:15:59.388 "nvme_io_md": false, 00:15:59.388 "write_zeroes": true, 00:15:59.388 "zcopy": true, 00:15:59.388 "get_zone_info": false, 00:15:59.388 "zone_management": false, 00:15:59.388 "zone_append": false, 00:15:59.388 "compare": false, 00:15:59.388 "compare_and_write": false, 00:15:59.388 "abort": true, 00:15:59.388 "seek_hole": false, 00:15:59.388 "seek_data": false, 00:15:59.388 "copy": true, 00:15:59.388 "nvme_iov_md": false 00:15:59.388 }, 00:15:59.388 "memory_domains": [ 00:15:59.388 { 00:15:59.388 "dma_device_id": "system", 00:15:59.388 "dma_device_type": 1 00:15:59.388 }, 00:15:59.388 { 00:15:59.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.388 "dma_device_type": 2 00:15:59.388 } 00:15:59.388 ], 00:15:59.388 "driver_specific": {} 00:15:59.388 } 00:15:59.388 ] 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:59.388 
04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.388 "name": "Existed_Raid", 00:15:59.388 "uuid": "fe61ae80-c66a-4712-9488-3ad94d8454eb", 00:15:59.388 "strip_size_kb": 64, 00:15:59.388 "state": "online", 00:15:59.388 "raid_level": "concat", 00:15:59.388 "superblock": false, 00:15:59.388 "num_base_bdevs": 4, 00:15:59.388 "num_base_bdevs_discovered": 4, 00:15:59.388 
"num_base_bdevs_operational": 4, 00:15:59.388 "base_bdevs_list": [ 00:15:59.388 { 00:15:59.388 "name": "NewBaseBdev", 00:15:59.388 "uuid": "d93c3fef-9a21-48fc-bc10-23a6839a28ed", 00:15:59.388 "is_configured": true, 00:15:59.388 "data_offset": 0, 00:15:59.388 "data_size": 65536 00:15:59.388 }, 00:15:59.388 { 00:15:59.388 "name": "BaseBdev2", 00:15:59.388 "uuid": "a08d3409-2b58-4820-b2b1-65fc86683c7b", 00:15:59.388 "is_configured": true, 00:15:59.388 "data_offset": 0, 00:15:59.388 "data_size": 65536 00:15:59.388 }, 00:15:59.388 { 00:15:59.388 "name": "BaseBdev3", 00:15:59.388 "uuid": "96971dec-253e-41ac-a898-b1dc9b9f6b52", 00:15:59.388 "is_configured": true, 00:15:59.388 "data_offset": 0, 00:15:59.388 "data_size": 65536 00:15:59.388 }, 00:15:59.388 { 00:15:59.388 "name": "BaseBdev4", 00:15:59.388 "uuid": "ce5b3857-4f51-4ba6-93de-daefaf55fb58", 00:15:59.388 "is_configured": true, 00:15:59.388 "data_offset": 0, 00:15:59.388 "data_size": 65536 00:15:59.388 } 00:15:59.388 ] 00:15:59.388 }' 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.388 04:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.955 
04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.955 [2024-11-27 04:37:47.478666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.955 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.955 "name": "Existed_Raid", 00:15:59.955 "aliases": [ 00:15:59.955 "fe61ae80-c66a-4712-9488-3ad94d8454eb" 00:15:59.955 ], 00:15:59.955 "product_name": "Raid Volume", 00:15:59.955 "block_size": 512, 00:15:59.955 "num_blocks": 262144, 00:15:59.955 "uuid": "fe61ae80-c66a-4712-9488-3ad94d8454eb", 00:15:59.955 "assigned_rate_limits": { 00:15:59.955 "rw_ios_per_sec": 0, 00:15:59.955 "rw_mbytes_per_sec": 0, 00:15:59.955 "r_mbytes_per_sec": 0, 00:15:59.955 "w_mbytes_per_sec": 0 00:15:59.955 }, 00:15:59.955 "claimed": false, 00:15:59.955 "zoned": false, 00:15:59.955 "supported_io_types": { 00:15:59.955 "read": true, 00:15:59.955 "write": true, 00:15:59.955 "unmap": true, 00:15:59.955 "flush": true, 00:15:59.955 "reset": true, 00:15:59.955 "nvme_admin": false, 00:15:59.955 "nvme_io": false, 00:15:59.955 "nvme_io_md": false, 00:15:59.955 "write_zeroes": true, 00:15:59.955 "zcopy": false, 00:15:59.955 "get_zone_info": false, 00:15:59.955 "zone_management": false, 00:15:59.955 "zone_append": false, 00:15:59.955 "compare": false, 00:15:59.955 "compare_and_write": false, 00:15:59.955 "abort": false, 00:15:59.955 "seek_hole": false, 00:15:59.955 "seek_data": false, 00:15:59.955 "copy": false, 00:15:59.955 "nvme_iov_md": false 00:15:59.955 }, 00:15:59.955 "memory_domains": [ 00:15:59.955 { 00:15:59.955 "dma_device_id": 
"system", 00:15:59.955 "dma_device_type": 1 00:15:59.955 }, 00:15:59.955 { 00:15:59.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.955 "dma_device_type": 2 00:15:59.955 }, 00:15:59.955 { 00:15:59.955 "dma_device_id": "system", 00:15:59.955 "dma_device_type": 1 00:15:59.955 }, 00:15:59.955 { 00:15:59.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.955 "dma_device_type": 2 00:15:59.955 }, 00:15:59.955 { 00:15:59.955 "dma_device_id": "system", 00:15:59.955 "dma_device_type": 1 00:15:59.955 }, 00:15:59.955 { 00:15:59.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.955 "dma_device_type": 2 00:15:59.955 }, 00:15:59.955 { 00:15:59.955 "dma_device_id": "system", 00:15:59.955 "dma_device_type": 1 00:15:59.955 }, 00:15:59.955 { 00:15:59.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.955 "dma_device_type": 2 00:15:59.955 } 00:15:59.955 ], 00:15:59.955 "driver_specific": { 00:15:59.955 "raid": { 00:15:59.955 "uuid": "fe61ae80-c66a-4712-9488-3ad94d8454eb", 00:15:59.956 "strip_size_kb": 64, 00:15:59.956 "state": "online", 00:15:59.956 "raid_level": "concat", 00:15:59.956 "superblock": false, 00:15:59.956 "num_base_bdevs": 4, 00:15:59.956 "num_base_bdevs_discovered": 4, 00:15:59.956 "num_base_bdevs_operational": 4, 00:15:59.956 "base_bdevs_list": [ 00:15:59.956 { 00:15:59.956 "name": "NewBaseBdev", 00:15:59.956 "uuid": "d93c3fef-9a21-48fc-bc10-23a6839a28ed", 00:15:59.956 "is_configured": true, 00:15:59.956 "data_offset": 0, 00:15:59.956 "data_size": 65536 00:15:59.956 }, 00:15:59.956 { 00:15:59.956 "name": "BaseBdev2", 00:15:59.956 "uuid": "a08d3409-2b58-4820-b2b1-65fc86683c7b", 00:15:59.956 "is_configured": true, 00:15:59.956 "data_offset": 0, 00:15:59.956 "data_size": 65536 00:15:59.956 }, 00:15:59.956 { 00:15:59.956 "name": "BaseBdev3", 00:15:59.956 "uuid": "96971dec-253e-41ac-a898-b1dc9b9f6b52", 00:15:59.956 "is_configured": true, 00:15:59.956 "data_offset": 0, 00:15:59.956 "data_size": 65536 00:15:59.956 }, 00:15:59.956 { 00:15:59.956 "name": 
"BaseBdev4", 00:15:59.956 "uuid": "ce5b3857-4f51-4ba6-93de-daefaf55fb58", 00:15:59.956 "is_configured": true, 00:15:59.956 "data_offset": 0, 00:15:59.956 "data_size": 65536 00:15:59.956 } 00:15:59.956 ] 00:15:59.956 } 00:15:59.956 } 00:15:59.956 }' 00:15:59.956 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:00.215 BaseBdev2 00:16:00.215 BaseBdev3 00:16:00.215 BaseBdev4' 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.215 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.552 [2024-11-27 04:37:47.850322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.552 [2024-11-27 04:37:47.850361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.552 [2024-11-27 04:37:47.850465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.552 [2024-11-27 04:37:47.850563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.552 [2024-11-27 04:37:47.850582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71502 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71502 
']' 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71502 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71502 00:16:00.552 killing process with pid 71502 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71502' 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71502 00:16:00.552 [2024-11-27 04:37:47.892626] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.552 04:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71502 00:16:00.829 [2024-11-27 04:37:48.262698] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.762 ************************************ 00:16:01.762 END TEST raid_state_function_test 00:16:01.762 ************************************ 00:16:01.762 04:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:01.762 00:16:01.762 real 0m13.060s 00:16:01.762 user 0m21.633s 00:16:01.762 sys 0m1.862s 00:16:01.762 04:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.762 04:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.020 04:37:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:02.020 
04:37:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:02.020 04:37:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.020 04:37:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.020 ************************************ 00:16:02.020 START TEST raid_state_function_test_sb 00:16:02.020 ************************************ 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72194 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72194' 
00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:02.020 Process raid pid: 72194 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72194 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72194 ']' 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.020 04:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.020 [2024-11-27 04:37:49.534828] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:16:02.020 [2024-11-27 04:37:49.535301] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.278 [2024-11-27 04:37:49.726472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.278 [2024-11-27 04:37:49.878704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.537 [2024-11-27 04:37:50.137910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.537 [2024-11-27 04:37:50.137957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.105 [2024-11-27 04:37:50.587912] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.105 [2024-11-27 04:37:50.588133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.105 [2024-11-27 04:37:50.588162] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.105 [2024-11-27 04:37:50.588192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.105 [2024-11-27 04:37:50.588203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:03.105 [2024-11-27 04:37:50.588217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.105 [2024-11-27 04:37:50.588227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:03.105 [2024-11-27 04:37:50.588241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.105 04:37:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.105 "name": "Existed_Raid", 00:16:03.105 "uuid": "f9a42aad-659a-451d-aaee-8c037c060140", 00:16:03.105 "strip_size_kb": 64, 00:16:03.105 "state": "configuring", 00:16:03.105 "raid_level": "concat", 00:16:03.105 "superblock": true, 00:16:03.105 "num_base_bdevs": 4, 00:16:03.105 "num_base_bdevs_discovered": 0, 00:16:03.105 "num_base_bdevs_operational": 4, 00:16:03.105 "base_bdevs_list": [ 00:16:03.105 { 00:16:03.105 "name": "BaseBdev1", 00:16:03.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.105 "is_configured": false, 00:16:03.105 "data_offset": 0, 00:16:03.105 "data_size": 0 00:16:03.105 }, 00:16:03.105 { 00:16:03.105 "name": "BaseBdev2", 00:16:03.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.105 "is_configured": false, 00:16:03.105 "data_offset": 0, 00:16:03.105 "data_size": 0 00:16:03.105 }, 00:16:03.105 { 00:16:03.105 "name": "BaseBdev3", 00:16:03.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.105 "is_configured": false, 00:16:03.105 "data_offset": 0, 00:16:03.105 "data_size": 0 00:16:03.105 }, 00:16:03.105 { 00:16:03.105 "name": "BaseBdev4", 00:16:03.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.105 "is_configured": false, 00:16:03.105 "data_offset": 0, 00:16:03.105 "data_size": 0 00:16:03.105 } 00:16:03.105 ] 00:16:03.105 }' 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.105 04:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.673 04:37:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.673 [2024-11-27 04:37:51.135948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.673 [2024-11-27 04:37:51.135998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.673 [2024-11-27 04:37:51.143959] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.673 [2024-11-27 04:37:51.144587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.673 [2024-11-27 04:37:51.144619] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.673 [2024-11-27 04:37:51.144638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.673 [2024-11-27 04:37:51.144648] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:03.673 [2024-11-27 04:37:51.144662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.673 [2024-11-27 04:37:51.144671] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:16:03.673 [2024-11-27 04:37:51.144685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.673 [2024-11-27 04:37:51.190334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.673 BaseBdev1 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.673 [ 00:16:03.673 { 00:16:03.673 "name": "BaseBdev1", 00:16:03.673 "aliases": [ 00:16:03.673 "945164b1-c27d-4e26-9f0f-72fe440409ac" 00:16:03.673 ], 00:16:03.673 "product_name": "Malloc disk", 00:16:03.673 "block_size": 512, 00:16:03.673 "num_blocks": 65536, 00:16:03.673 "uuid": "945164b1-c27d-4e26-9f0f-72fe440409ac", 00:16:03.673 "assigned_rate_limits": { 00:16:03.673 "rw_ios_per_sec": 0, 00:16:03.673 "rw_mbytes_per_sec": 0, 00:16:03.673 "r_mbytes_per_sec": 0, 00:16:03.673 "w_mbytes_per_sec": 0 00:16:03.673 }, 00:16:03.673 "claimed": true, 00:16:03.673 "claim_type": "exclusive_write", 00:16:03.673 "zoned": false, 00:16:03.673 "supported_io_types": { 00:16:03.673 "read": true, 00:16:03.673 "write": true, 00:16:03.673 "unmap": true, 00:16:03.673 "flush": true, 00:16:03.673 "reset": true, 00:16:03.673 "nvme_admin": false, 00:16:03.673 "nvme_io": false, 00:16:03.673 "nvme_io_md": false, 00:16:03.673 "write_zeroes": true, 00:16:03.673 "zcopy": true, 00:16:03.673 "get_zone_info": false, 00:16:03.673 "zone_management": false, 00:16:03.673 "zone_append": false, 00:16:03.673 "compare": false, 00:16:03.673 "compare_and_write": false, 00:16:03.673 "abort": true, 00:16:03.673 "seek_hole": false, 00:16:03.673 "seek_data": false, 00:16:03.673 "copy": true, 00:16:03.673 "nvme_iov_md": false 00:16:03.673 }, 00:16:03.673 "memory_domains": [ 00:16:03.673 { 00:16:03.673 "dma_device_id": "system", 00:16:03.673 "dma_device_type": 1 00:16:03.673 }, 00:16:03.673 { 00:16:03.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.673 "dma_device_type": 2 00:16:03.673 } 
00:16:03.673 ], 00:16:03.673 "driver_specific": {} 00:16:03.673 } 00:16:03.673 ] 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.673 04:37:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.673 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.673 "name": "Existed_Raid", 00:16:03.673 "uuid": "49e6c193-f255-448e-8d58-6ee3dbb6bd34", 00:16:03.673 "strip_size_kb": 64, 00:16:03.673 "state": "configuring", 00:16:03.673 "raid_level": "concat", 00:16:03.673 "superblock": true, 00:16:03.673 "num_base_bdevs": 4, 00:16:03.673 "num_base_bdevs_discovered": 1, 00:16:03.673 "num_base_bdevs_operational": 4, 00:16:03.673 "base_bdevs_list": [ 00:16:03.673 { 00:16:03.673 "name": "BaseBdev1", 00:16:03.673 "uuid": "945164b1-c27d-4e26-9f0f-72fe440409ac", 00:16:03.673 "is_configured": true, 00:16:03.674 "data_offset": 2048, 00:16:03.674 "data_size": 63488 00:16:03.674 }, 00:16:03.674 { 00:16:03.674 "name": "BaseBdev2", 00:16:03.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.674 "is_configured": false, 00:16:03.674 "data_offset": 0, 00:16:03.674 "data_size": 0 00:16:03.674 }, 00:16:03.674 { 00:16:03.674 "name": "BaseBdev3", 00:16:03.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.674 "is_configured": false, 00:16:03.674 "data_offset": 0, 00:16:03.674 "data_size": 0 00:16:03.674 }, 00:16:03.674 { 00:16:03.674 "name": "BaseBdev4", 00:16:03.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.674 "is_configured": false, 00:16:03.674 "data_offset": 0, 00:16:03.674 "data_size": 0 00:16:03.674 } 00:16:03.674 ] 00:16:03.674 }' 00:16:03.674 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.674 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.239 04:37:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.239 [2024-11-27 04:37:51.742529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.239 [2024-11-27 04:37:51.742603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.239 [2024-11-27 04:37:51.754609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.239 [2024-11-27 04:37:51.757061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.239 [2024-11-27 04:37:51.757255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.239 [2024-11-27 04:37:51.757283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.239 [2024-11-27 04:37:51.757303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.239 [2024-11-27 04:37:51.757314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.239 [2024-11-27 04:37:51.757328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:04.239 "name": "Existed_Raid", 00:16:04.239 "uuid": "cf8350ac-c971-4daa-b307-647748917c3f", 00:16:04.239 "strip_size_kb": 64, 00:16:04.239 "state": "configuring", 00:16:04.239 "raid_level": "concat", 00:16:04.239 "superblock": true, 00:16:04.239 "num_base_bdevs": 4, 00:16:04.239 "num_base_bdevs_discovered": 1, 00:16:04.239 "num_base_bdevs_operational": 4, 00:16:04.239 "base_bdevs_list": [ 00:16:04.239 { 00:16:04.239 "name": "BaseBdev1", 00:16:04.239 "uuid": "945164b1-c27d-4e26-9f0f-72fe440409ac", 00:16:04.239 "is_configured": true, 00:16:04.239 "data_offset": 2048, 00:16:04.239 "data_size": 63488 00:16:04.239 }, 00:16:04.239 { 00:16:04.239 "name": "BaseBdev2", 00:16:04.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.239 "is_configured": false, 00:16:04.239 "data_offset": 0, 00:16:04.239 "data_size": 0 00:16:04.239 }, 00:16:04.239 { 00:16:04.239 "name": "BaseBdev3", 00:16:04.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.239 "is_configured": false, 00:16:04.239 "data_offset": 0, 00:16:04.239 "data_size": 0 00:16:04.239 }, 00:16:04.239 { 00:16:04.239 "name": "BaseBdev4", 00:16:04.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.239 "is_configured": false, 00:16:04.239 "data_offset": 0, 00:16:04.239 "data_size": 0 00:16:04.239 } 00:16:04.239 ] 00:16:04.239 }' 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.239 04:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.807 [2024-11-27 04:37:52.338870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:16:04.807 BaseBdev2 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.807 [ 00:16:04.807 { 00:16:04.807 "name": "BaseBdev2", 00:16:04.807 "aliases": [ 00:16:04.807 "5d74d11d-2f5d-49a5-aa20-3962990b9c98" 00:16:04.807 ], 00:16:04.807 "product_name": "Malloc disk", 00:16:04.807 "block_size": 512, 00:16:04.807 "num_blocks": 65536, 00:16:04.807 "uuid": "5d74d11d-2f5d-49a5-aa20-3962990b9c98", 
00:16:04.807 "assigned_rate_limits": { 00:16:04.807 "rw_ios_per_sec": 0, 00:16:04.807 "rw_mbytes_per_sec": 0, 00:16:04.807 "r_mbytes_per_sec": 0, 00:16:04.807 "w_mbytes_per_sec": 0 00:16:04.807 }, 00:16:04.807 "claimed": true, 00:16:04.807 "claim_type": "exclusive_write", 00:16:04.807 "zoned": false, 00:16:04.807 "supported_io_types": { 00:16:04.807 "read": true, 00:16:04.807 "write": true, 00:16:04.807 "unmap": true, 00:16:04.807 "flush": true, 00:16:04.807 "reset": true, 00:16:04.807 "nvme_admin": false, 00:16:04.807 "nvme_io": false, 00:16:04.807 "nvme_io_md": false, 00:16:04.807 "write_zeroes": true, 00:16:04.807 "zcopy": true, 00:16:04.807 "get_zone_info": false, 00:16:04.807 "zone_management": false, 00:16:04.807 "zone_append": false, 00:16:04.807 "compare": false, 00:16:04.807 "compare_and_write": false, 00:16:04.807 "abort": true, 00:16:04.807 "seek_hole": false, 00:16:04.807 "seek_data": false, 00:16:04.807 "copy": true, 00:16:04.807 "nvme_iov_md": false 00:16:04.807 }, 00:16:04.807 "memory_domains": [ 00:16:04.807 { 00:16:04.807 "dma_device_id": "system", 00:16:04.807 "dma_device_type": 1 00:16:04.807 }, 00:16:04.807 { 00:16:04.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.807 "dma_device_type": 2 00:16:04.807 } 00:16:04.807 ], 00:16:04.807 "driver_specific": {} 00:16:04.807 } 00:16:04.807 ] 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:04.807 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.808 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.808 "name": "Existed_Raid", 00:16:04.808 "uuid": "cf8350ac-c971-4daa-b307-647748917c3f", 00:16:04.808 "strip_size_kb": 64, 00:16:04.808 "state": "configuring", 00:16:04.808 "raid_level": "concat", 00:16:04.808 "superblock": true, 00:16:04.808 "num_base_bdevs": 4, 00:16:04.808 "num_base_bdevs_discovered": 2, 00:16:04.808 
"num_base_bdevs_operational": 4, 00:16:04.808 "base_bdevs_list": [ 00:16:04.808 { 00:16:04.808 "name": "BaseBdev1", 00:16:04.808 "uuid": "945164b1-c27d-4e26-9f0f-72fe440409ac", 00:16:04.808 "is_configured": true, 00:16:04.808 "data_offset": 2048, 00:16:04.808 "data_size": 63488 00:16:04.808 }, 00:16:04.808 { 00:16:04.808 "name": "BaseBdev2", 00:16:04.808 "uuid": "5d74d11d-2f5d-49a5-aa20-3962990b9c98", 00:16:04.808 "is_configured": true, 00:16:04.808 "data_offset": 2048, 00:16:04.808 "data_size": 63488 00:16:04.808 }, 00:16:04.808 { 00:16:04.808 "name": "BaseBdev3", 00:16:04.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.808 "is_configured": false, 00:16:04.808 "data_offset": 0, 00:16:04.808 "data_size": 0 00:16:04.808 }, 00:16:04.808 { 00:16:04.808 "name": "BaseBdev4", 00:16:04.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.808 "is_configured": false, 00:16:04.808 "data_offset": 0, 00:16:04.808 "data_size": 0 00:16:04.808 } 00:16:04.808 ] 00:16:04.808 }' 00:16:05.068 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.068 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.327 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.327 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.327 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.587 [2024-11-27 04:37:52.949284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.587 BaseBdev3 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.587 [ 00:16:05.587 { 00:16:05.587 "name": "BaseBdev3", 00:16:05.587 "aliases": [ 00:16:05.587 "3c07cb8a-e78e-430c-9087-d12842872be4" 00:16:05.587 ], 00:16:05.587 "product_name": "Malloc disk", 00:16:05.587 "block_size": 512, 00:16:05.587 "num_blocks": 65536, 00:16:05.587 "uuid": "3c07cb8a-e78e-430c-9087-d12842872be4", 00:16:05.587 "assigned_rate_limits": { 00:16:05.587 "rw_ios_per_sec": 0, 00:16:05.587 "rw_mbytes_per_sec": 0, 00:16:05.587 "r_mbytes_per_sec": 0, 00:16:05.587 "w_mbytes_per_sec": 0 00:16:05.587 }, 00:16:05.587 "claimed": true, 00:16:05.587 "claim_type": "exclusive_write", 00:16:05.587 "zoned": false, 00:16:05.587 "supported_io_types": { 
00:16:05.587 "read": true, 00:16:05.587 "write": true, 00:16:05.587 "unmap": true, 00:16:05.587 "flush": true, 00:16:05.587 "reset": true, 00:16:05.587 "nvme_admin": false, 00:16:05.587 "nvme_io": false, 00:16:05.587 "nvme_io_md": false, 00:16:05.587 "write_zeroes": true, 00:16:05.587 "zcopy": true, 00:16:05.587 "get_zone_info": false, 00:16:05.587 "zone_management": false, 00:16:05.587 "zone_append": false, 00:16:05.587 "compare": false, 00:16:05.587 "compare_and_write": false, 00:16:05.587 "abort": true, 00:16:05.587 "seek_hole": false, 00:16:05.587 "seek_data": false, 00:16:05.587 "copy": true, 00:16:05.587 "nvme_iov_md": false 00:16:05.587 }, 00:16:05.587 "memory_domains": [ 00:16:05.587 { 00:16:05.587 "dma_device_id": "system", 00:16:05.587 "dma_device_type": 1 00:16:05.587 }, 00:16:05.587 { 00:16:05.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.587 "dma_device_type": 2 00:16:05.587 } 00:16:05.587 ], 00:16:05.587 "driver_specific": {} 00:16:05.587 } 00:16:05.587 ] 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.587 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.588 04:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.588 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.588 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.588 "name": "Existed_Raid", 00:16:05.588 "uuid": "cf8350ac-c971-4daa-b307-647748917c3f", 00:16:05.588 "strip_size_kb": 64, 00:16:05.588 "state": "configuring", 00:16:05.588 "raid_level": "concat", 00:16:05.588 "superblock": true, 00:16:05.588 "num_base_bdevs": 4, 00:16:05.588 "num_base_bdevs_discovered": 3, 00:16:05.588 "num_base_bdevs_operational": 4, 00:16:05.588 "base_bdevs_list": [ 00:16:05.588 { 00:16:05.588 "name": "BaseBdev1", 00:16:05.588 "uuid": "945164b1-c27d-4e26-9f0f-72fe440409ac", 00:16:05.588 "is_configured": true, 00:16:05.588 "data_offset": 2048, 00:16:05.588 "data_size": 63488 00:16:05.588 }, 00:16:05.588 { 00:16:05.588 "name": "BaseBdev2", 00:16:05.588 
"uuid": "5d74d11d-2f5d-49a5-aa20-3962990b9c98", 00:16:05.588 "is_configured": true, 00:16:05.588 "data_offset": 2048, 00:16:05.588 "data_size": 63488 00:16:05.588 }, 00:16:05.588 { 00:16:05.588 "name": "BaseBdev3", 00:16:05.588 "uuid": "3c07cb8a-e78e-430c-9087-d12842872be4", 00:16:05.588 "is_configured": true, 00:16:05.588 "data_offset": 2048, 00:16:05.588 "data_size": 63488 00:16:05.588 }, 00:16:05.588 { 00:16:05.588 "name": "BaseBdev4", 00:16:05.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.588 "is_configured": false, 00:16:05.588 "data_offset": 0, 00:16:05.588 "data_size": 0 00:16:05.588 } 00:16:05.588 ] 00:16:05.588 }' 00:16:05.588 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.588 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.155 [2024-11-27 04:37:53.547981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.155 [2024-11-27 04:37:53.548322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:06.155 [2024-11-27 04:37:53.548342] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:06.155 BaseBdev4 00:16:06.155 [2024-11-27 04:37:53.548684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:06.155 [2024-11-27 04:37:53.548900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:06.155 [2024-11-27 04:37:53.548922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:16:06.155 [2024-11-27 04:37:53.549101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.155 [ 00:16:06.155 { 00:16:06.155 "name": "BaseBdev4", 00:16:06.155 "aliases": [ 00:16:06.155 "6183a513-4924-4216-8acf-d7aeb81fbc46" 00:16:06.155 ], 00:16:06.155 "product_name": "Malloc disk", 00:16:06.155 "block_size": 512, 00:16:06.155 
"num_blocks": 65536, 00:16:06.155 "uuid": "6183a513-4924-4216-8acf-d7aeb81fbc46", 00:16:06.155 "assigned_rate_limits": { 00:16:06.155 "rw_ios_per_sec": 0, 00:16:06.155 "rw_mbytes_per_sec": 0, 00:16:06.155 "r_mbytes_per_sec": 0, 00:16:06.155 "w_mbytes_per_sec": 0 00:16:06.155 }, 00:16:06.155 "claimed": true, 00:16:06.155 "claim_type": "exclusive_write", 00:16:06.155 "zoned": false, 00:16:06.155 "supported_io_types": { 00:16:06.155 "read": true, 00:16:06.155 "write": true, 00:16:06.155 "unmap": true, 00:16:06.155 "flush": true, 00:16:06.155 "reset": true, 00:16:06.155 "nvme_admin": false, 00:16:06.155 "nvme_io": false, 00:16:06.155 "nvme_io_md": false, 00:16:06.155 "write_zeroes": true, 00:16:06.155 "zcopy": true, 00:16:06.155 "get_zone_info": false, 00:16:06.155 "zone_management": false, 00:16:06.155 "zone_append": false, 00:16:06.155 "compare": false, 00:16:06.155 "compare_and_write": false, 00:16:06.155 "abort": true, 00:16:06.155 "seek_hole": false, 00:16:06.155 "seek_data": false, 00:16:06.155 "copy": true, 00:16:06.155 "nvme_iov_md": false 00:16:06.155 }, 00:16:06.155 "memory_domains": [ 00:16:06.155 { 00:16:06.155 "dma_device_id": "system", 00:16:06.155 "dma_device_type": 1 00:16:06.155 }, 00:16:06.155 { 00:16:06.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.155 "dma_device_type": 2 00:16:06.155 } 00:16:06.155 ], 00:16:06.155 "driver_specific": {} 00:16:06.155 } 00:16:06.155 ] 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.155 "name": "Existed_Raid", 00:16:06.155 "uuid": "cf8350ac-c971-4daa-b307-647748917c3f", 00:16:06.155 "strip_size_kb": 64, 00:16:06.155 "state": "online", 00:16:06.155 "raid_level": "concat", 00:16:06.155 "superblock": true, 00:16:06.155 "num_base_bdevs": 4, 
00:16:06.155 "num_base_bdevs_discovered": 4, 00:16:06.155 "num_base_bdevs_operational": 4, 00:16:06.155 "base_bdevs_list": [ 00:16:06.155 { 00:16:06.155 "name": "BaseBdev1", 00:16:06.155 "uuid": "945164b1-c27d-4e26-9f0f-72fe440409ac", 00:16:06.155 "is_configured": true, 00:16:06.155 "data_offset": 2048, 00:16:06.155 "data_size": 63488 00:16:06.155 }, 00:16:06.155 { 00:16:06.155 "name": "BaseBdev2", 00:16:06.155 "uuid": "5d74d11d-2f5d-49a5-aa20-3962990b9c98", 00:16:06.155 "is_configured": true, 00:16:06.155 "data_offset": 2048, 00:16:06.155 "data_size": 63488 00:16:06.155 }, 00:16:06.155 { 00:16:06.155 "name": "BaseBdev3", 00:16:06.155 "uuid": "3c07cb8a-e78e-430c-9087-d12842872be4", 00:16:06.155 "is_configured": true, 00:16:06.155 "data_offset": 2048, 00:16:06.155 "data_size": 63488 00:16:06.155 }, 00:16:06.155 { 00:16:06.155 "name": "BaseBdev4", 00:16:06.155 "uuid": "6183a513-4924-4216-8acf-d7aeb81fbc46", 00:16:06.155 "is_configured": true, 00:16:06.155 "data_offset": 2048, 00:16:06.155 "data_size": 63488 00:16:06.155 } 00:16:06.155 ] 00:16:06.155 }' 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.155 04:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:06.721 
04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.721 [2024-11-27 04:37:54.100631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.721 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:06.721 "name": "Existed_Raid", 00:16:06.721 "aliases": [ 00:16:06.721 "cf8350ac-c971-4daa-b307-647748917c3f" 00:16:06.721 ], 00:16:06.721 "product_name": "Raid Volume", 00:16:06.721 "block_size": 512, 00:16:06.721 "num_blocks": 253952, 00:16:06.721 "uuid": "cf8350ac-c971-4daa-b307-647748917c3f", 00:16:06.721 "assigned_rate_limits": { 00:16:06.721 "rw_ios_per_sec": 0, 00:16:06.721 "rw_mbytes_per_sec": 0, 00:16:06.721 "r_mbytes_per_sec": 0, 00:16:06.721 "w_mbytes_per_sec": 0 00:16:06.721 }, 00:16:06.721 "claimed": false, 00:16:06.721 "zoned": false, 00:16:06.721 "supported_io_types": { 00:16:06.721 "read": true, 00:16:06.721 "write": true, 00:16:06.721 "unmap": true, 00:16:06.721 "flush": true, 00:16:06.721 "reset": true, 00:16:06.721 "nvme_admin": false, 00:16:06.721 "nvme_io": false, 00:16:06.721 "nvme_io_md": false, 00:16:06.721 "write_zeroes": true, 00:16:06.721 "zcopy": false, 00:16:06.721 "get_zone_info": false, 00:16:06.721 "zone_management": false, 00:16:06.721 "zone_append": false, 00:16:06.721 "compare": false, 00:16:06.721 "compare_and_write": false, 00:16:06.721 "abort": false, 00:16:06.721 "seek_hole": false, 00:16:06.721 "seek_data": false, 00:16:06.721 "copy": false, 00:16:06.721 
"nvme_iov_md": false 00:16:06.721 }, 00:16:06.721 "memory_domains": [ 00:16:06.721 { 00:16:06.721 "dma_device_id": "system", 00:16:06.721 "dma_device_type": 1 00:16:06.721 }, 00:16:06.722 { 00:16:06.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.722 "dma_device_type": 2 00:16:06.722 }, 00:16:06.722 { 00:16:06.722 "dma_device_id": "system", 00:16:06.722 "dma_device_type": 1 00:16:06.722 }, 00:16:06.722 { 00:16:06.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.722 "dma_device_type": 2 00:16:06.722 }, 00:16:06.722 { 00:16:06.722 "dma_device_id": "system", 00:16:06.722 "dma_device_type": 1 00:16:06.722 }, 00:16:06.722 { 00:16:06.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.722 "dma_device_type": 2 00:16:06.722 }, 00:16:06.722 { 00:16:06.722 "dma_device_id": "system", 00:16:06.722 "dma_device_type": 1 00:16:06.722 }, 00:16:06.722 { 00:16:06.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.722 "dma_device_type": 2 00:16:06.722 } 00:16:06.722 ], 00:16:06.722 "driver_specific": { 00:16:06.722 "raid": { 00:16:06.722 "uuid": "cf8350ac-c971-4daa-b307-647748917c3f", 00:16:06.722 "strip_size_kb": 64, 00:16:06.722 "state": "online", 00:16:06.722 "raid_level": "concat", 00:16:06.722 "superblock": true, 00:16:06.722 "num_base_bdevs": 4, 00:16:06.722 "num_base_bdevs_discovered": 4, 00:16:06.722 "num_base_bdevs_operational": 4, 00:16:06.722 "base_bdevs_list": [ 00:16:06.722 { 00:16:06.722 "name": "BaseBdev1", 00:16:06.722 "uuid": "945164b1-c27d-4e26-9f0f-72fe440409ac", 00:16:06.722 "is_configured": true, 00:16:06.722 "data_offset": 2048, 00:16:06.722 "data_size": 63488 00:16:06.722 }, 00:16:06.722 { 00:16:06.722 "name": "BaseBdev2", 00:16:06.722 "uuid": "5d74d11d-2f5d-49a5-aa20-3962990b9c98", 00:16:06.722 "is_configured": true, 00:16:06.722 "data_offset": 2048, 00:16:06.722 "data_size": 63488 00:16:06.722 }, 00:16:06.722 { 00:16:06.722 "name": "BaseBdev3", 00:16:06.722 "uuid": "3c07cb8a-e78e-430c-9087-d12842872be4", 00:16:06.722 "is_configured": true, 
00:16:06.722 "data_offset": 2048, 00:16:06.722 "data_size": 63488 00:16:06.722 }, 00:16:06.722 { 00:16:06.722 "name": "BaseBdev4", 00:16:06.722 "uuid": "6183a513-4924-4216-8acf-d7aeb81fbc46", 00:16:06.722 "is_configured": true, 00:16:06.722 "data_offset": 2048, 00:16:06.722 "data_size": 63488 00:16:06.722 } 00:16:06.722 ] 00:16:06.722 } 00:16:06.722 } 00:16:06.722 }' 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:06.722 BaseBdev2 00:16:06.722 BaseBdev3 00:16:06.722 BaseBdev4' 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.722 04:37:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.722 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.980 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.980 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.980 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.980 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:06.980 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 [2024-11-27 04:37:54.468389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:06.981 [2024-11-27 04:37:54.468589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.981 [2024-11-27 04:37:54.468767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:07.239 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.239 "name": "Existed_Raid", 00:16:07.239 "uuid": "cf8350ac-c971-4daa-b307-647748917c3f", 00:16:07.239 "strip_size_kb": 64, 00:16:07.239 "state": "offline", 00:16:07.239 "raid_level": "concat", 00:16:07.239 "superblock": true, 00:16:07.239 "num_base_bdevs": 4, 00:16:07.239 "num_base_bdevs_discovered": 3, 00:16:07.239 "num_base_bdevs_operational": 3, 00:16:07.239 "base_bdevs_list": [ 00:16:07.239 { 00:16:07.239 "name": null, 00:16:07.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.239 "is_configured": false, 00:16:07.239 "data_offset": 0, 00:16:07.239 "data_size": 63488 00:16:07.239 }, 00:16:07.239 { 00:16:07.239 "name": "BaseBdev2", 00:16:07.239 "uuid": "5d74d11d-2f5d-49a5-aa20-3962990b9c98", 00:16:07.239 "is_configured": true, 00:16:07.239 "data_offset": 2048, 00:16:07.239 "data_size": 63488 00:16:07.239 }, 00:16:07.239 { 00:16:07.239 "name": "BaseBdev3", 00:16:07.239 "uuid": "3c07cb8a-e78e-430c-9087-d12842872be4", 00:16:07.239 "is_configured": true, 00:16:07.239 "data_offset": 2048, 00:16:07.239 "data_size": 63488 00:16:07.239 }, 00:16:07.239 { 00:16:07.239 "name": "BaseBdev4", 00:16:07.239 "uuid": "6183a513-4924-4216-8acf-d7aeb81fbc46", 00:16:07.239 "is_configured": true, 00:16:07.239 "data_offset": 2048, 00:16:07.239 "data_size": 63488 00:16:07.239 } 00:16:07.239 ] 00:16:07.239 }' 00:16:07.239 04:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.239 04:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.498 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:07.498 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:07.498 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.498 
04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:07.498 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.498 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.498 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.756 [2024-11-27 04:37:55.145905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.756 [2024-11-27 04:37:55.290024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:07.756 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:08.015 04:37:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 [2024-11-27 04:37:55.434269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:08.015 [2024-11-27 04:37:55.434481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 BaseBdev2 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.015 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 [ 00:16:08.275 { 00:16:08.275 "name": "BaseBdev2", 00:16:08.275 "aliases": [ 00:16:08.275 
"11647233-a8f7-4426-9573-e58ce310525f" 00:16:08.275 ], 00:16:08.275 "product_name": "Malloc disk", 00:16:08.275 "block_size": 512, 00:16:08.275 "num_blocks": 65536, 00:16:08.275 "uuid": "11647233-a8f7-4426-9573-e58ce310525f", 00:16:08.275 "assigned_rate_limits": { 00:16:08.275 "rw_ios_per_sec": 0, 00:16:08.275 "rw_mbytes_per_sec": 0, 00:16:08.275 "r_mbytes_per_sec": 0, 00:16:08.275 "w_mbytes_per_sec": 0 00:16:08.275 }, 00:16:08.275 "claimed": false, 00:16:08.275 "zoned": false, 00:16:08.275 "supported_io_types": { 00:16:08.275 "read": true, 00:16:08.275 "write": true, 00:16:08.275 "unmap": true, 00:16:08.275 "flush": true, 00:16:08.275 "reset": true, 00:16:08.275 "nvme_admin": false, 00:16:08.275 "nvme_io": false, 00:16:08.275 "nvme_io_md": false, 00:16:08.275 "write_zeroes": true, 00:16:08.275 "zcopy": true, 00:16:08.275 "get_zone_info": false, 00:16:08.275 "zone_management": false, 00:16:08.275 "zone_append": false, 00:16:08.275 "compare": false, 00:16:08.275 "compare_and_write": false, 00:16:08.275 "abort": true, 00:16:08.275 "seek_hole": false, 00:16:08.275 "seek_data": false, 00:16:08.275 "copy": true, 00:16:08.275 "nvme_iov_md": false 00:16:08.275 }, 00:16:08.275 "memory_domains": [ 00:16:08.275 { 00:16:08.275 "dma_device_id": "system", 00:16:08.275 "dma_device_type": 1 00:16:08.275 }, 00:16:08.275 { 00:16:08.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.275 "dma_device_type": 2 00:16:08.275 } 00:16:08.275 ], 00:16:08.275 "driver_specific": {} 00:16:08.275 } 00:16:08.275 ] 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.275 04:37:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 BaseBdev3 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 [ 00:16:08.275 { 
00:16:08.275 "name": "BaseBdev3", 00:16:08.275 "aliases": [ 00:16:08.275 "69425c35-11be-47eb-9b4a-098588995043" 00:16:08.275 ], 00:16:08.275 "product_name": "Malloc disk", 00:16:08.275 "block_size": 512, 00:16:08.275 "num_blocks": 65536, 00:16:08.275 "uuid": "69425c35-11be-47eb-9b4a-098588995043", 00:16:08.275 "assigned_rate_limits": { 00:16:08.275 "rw_ios_per_sec": 0, 00:16:08.275 "rw_mbytes_per_sec": 0, 00:16:08.275 "r_mbytes_per_sec": 0, 00:16:08.275 "w_mbytes_per_sec": 0 00:16:08.275 }, 00:16:08.275 "claimed": false, 00:16:08.275 "zoned": false, 00:16:08.275 "supported_io_types": { 00:16:08.275 "read": true, 00:16:08.275 "write": true, 00:16:08.275 "unmap": true, 00:16:08.275 "flush": true, 00:16:08.275 "reset": true, 00:16:08.275 "nvme_admin": false, 00:16:08.275 "nvme_io": false, 00:16:08.275 "nvme_io_md": false, 00:16:08.275 "write_zeroes": true, 00:16:08.275 "zcopy": true, 00:16:08.275 "get_zone_info": false, 00:16:08.275 "zone_management": false, 00:16:08.275 "zone_append": false, 00:16:08.275 "compare": false, 00:16:08.275 "compare_and_write": false, 00:16:08.275 "abort": true, 00:16:08.275 "seek_hole": false, 00:16:08.275 "seek_data": false, 00:16:08.275 "copy": true, 00:16:08.275 "nvme_iov_md": false 00:16:08.275 }, 00:16:08.275 "memory_domains": [ 00:16:08.275 { 00:16:08.275 "dma_device_id": "system", 00:16:08.275 "dma_device_type": 1 00:16:08.275 }, 00:16:08.275 { 00:16:08.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.275 "dma_device_type": 2 00:16:08.275 } 00:16:08.275 ], 00:16:08.275 "driver_specific": {} 00:16:08.275 } 00:16:08.275 ] 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 BaseBdev4 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.275 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:08.275 [ 00:16:08.275 { 00:16:08.275 "name": "BaseBdev4", 00:16:08.275 "aliases": [ 00:16:08.275 "ea43ac89-8178-478f-a7ac-b3e6ac846f1c" 00:16:08.275 ], 00:16:08.275 "product_name": "Malloc disk", 00:16:08.275 "block_size": 512, 00:16:08.275 "num_blocks": 65536, 00:16:08.275 "uuid": "ea43ac89-8178-478f-a7ac-b3e6ac846f1c", 00:16:08.275 "assigned_rate_limits": { 00:16:08.275 "rw_ios_per_sec": 0, 00:16:08.275 "rw_mbytes_per_sec": 0, 00:16:08.275 "r_mbytes_per_sec": 0, 00:16:08.275 "w_mbytes_per_sec": 0 00:16:08.275 }, 00:16:08.275 "claimed": false, 00:16:08.275 "zoned": false, 00:16:08.275 "supported_io_types": { 00:16:08.275 "read": true, 00:16:08.275 "write": true, 00:16:08.275 "unmap": true, 00:16:08.275 "flush": true, 00:16:08.275 "reset": true, 00:16:08.275 "nvme_admin": false, 00:16:08.275 "nvme_io": false, 00:16:08.275 "nvme_io_md": false, 00:16:08.275 "write_zeroes": true, 00:16:08.275 "zcopy": true, 00:16:08.275 "get_zone_info": false, 00:16:08.275 "zone_management": false, 00:16:08.275 "zone_append": false, 00:16:08.275 "compare": false, 00:16:08.275 "compare_and_write": false, 00:16:08.275 "abort": true, 00:16:08.275 "seek_hole": false, 00:16:08.275 "seek_data": false, 00:16:08.275 "copy": true, 00:16:08.275 "nvme_iov_md": false 00:16:08.275 }, 00:16:08.275 "memory_domains": [ 00:16:08.276 { 00:16:08.276 "dma_device_id": "system", 00:16:08.276 "dma_device_type": 1 00:16:08.276 }, 00:16:08.276 { 00:16:08.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.276 "dma_device_type": 2 00:16:08.276 } 00:16:08.276 ], 00:16:08.276 "driver_specific": {} 00:16:08.276 } 00:16:08.276 ] 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.276 04:37:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.276 [2024-11-27 04:37:55.799009] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.276 [2024-11-27 04:37:55.799614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.276 [2024-11-27 04:37:55.799662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.276 [2024-11-27 04:37:55.802039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.276 [2024-11-27 04:37:55.802111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.276 "name": "Existed_Raid", 00:16:08.276 "uuid": "86140eb9-e696-48e1-af62-376b1c37465a", 00:16:08.276 "strip_size_kb": 64, 00:16:08.276 "state": "configuring", 00:16:08.276 "raid_level": "concat", 00:16:08.276 "superblock": true, 00:16:08.276 "num_base_bdevs": 4, 00:16:08.276 "num_base_bdevs_discovered": 3, 00:16:08.276 "num_base_bdevs_operational": 4, 00:16:08.276 "base_bdevs_list": [ 00:16:08.276 { 00:16:08.276 "name": "BaseBdev1", 00:16:08.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.276 "is_configured": false, 00:16:08.276 "data_offset": 0, 00:16:08.276 "data_size": 0 00:16:08.276 }, 00:16:08.276 { 00:16:08.276 "name": "BaseBdev2", 00:16:08.276 "uuid": "11647233-a8f7-4426-9573-e58ce310525f", 00:16:08.276 "is_configured": true, 00:16:08.276 "data_offset": 2048, 00:16:08.276 "data_size": 63488 
00:16:08.276 }, 00:16:08.276 { 00:16:08.276 "name": "BaseBdev3", 00:16:08.276 "uuid": "69425c35-11be-47eb-9b4a-098588995043", 00:16:08.276 "is_configured": true, 00:16:08.276 "data_offset": 2048, 00:16:08.276 "data_size": 63488 00:16:08.276 }, 00:16:08.276 { 00:16:08.276 "name": "BaseBdev4", 00:16:08.276 "uuid": "ea43ac89-8178-478f-a7ac-b3e6ac846f1c", 00:16:08.276 "is_configured": true, 00:16:08.276 "data_offset": 2048, 00:16:08.276 "data_size": 63488 00:16:08.276 } 00:16:08.276 ] 00:16:08.276 }' 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.276 04:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.845 [2024-11-27 04:37:56.311207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.845 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.845 "name": "Existed_Raid", 00:16:08.845 "uuid": "86140eb9-e696-48e1-af62-376b1c37465a", 00:16:08.845 "strip_size_kb": 64, 00:16:08.845 "state": "configuring", 00:16:08.845 "raid_level": "concat", 00:16:08.845 "superblock": true, 00:16:08.845 "num_base_bdevs": 4, 00:16:08.845 "num_base_bdevs_discovered": 2, 00:16:08.845 "num_base_bdevs_operational": 4, 00:16:08.845 "base_bdevs_list": [ 00:16:08.845 { 00:16:08.845 "name": "BaseBdev1", 00:16:08.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.845 "is_configured": false, 00:16:08.845 "data_offset": 0, 00:16:08.845 "data_size": 0 00:16:08.846 }, 00:16:08.846 { 00:16:08.846 "name": null, 00:16:08.846 "uuid": "11647233-a8f7-4426-9573-e58ce310525f", 00:16:08.846 "is_configured": false, 00:16:08.846 "data_offset": 0, 00:16:08.846 "data_size": 63488 
00:16:08.846 }, 00:16:08.846 { 00:16:08.846 "name": "BaseBdev3", 00:16:08.846 "uuid": "69425c35-11be-47eb-9b4a-098588995043", 00:16:08.846 "is_configured": true, 00:16:08.846 "data_offset": 2048, 00:16:08.846 "data_size": 63488 00:16:08.846 }, 00:16:08.846 { 00:16:08.846 "name": "BaseBdev4", 00:16:08.846 "uuid": "ea43ac89-8178-478f-a7ac-b3e6ac846f1c", 00:16:08.846 "is_configured": true, 00:16:08.846 "data_offset": 2048, 00:16:08.846 "data_size": 63488 00:16:08.846 } 00:16:08.846 ] 00:16:08.846 }' 00:16:08.846 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.846 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.412 [2024-11-27 04:37:56.888795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.412 BaseBdev1 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.412 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.412 [ 00:16:09.412 { 00:16:09.412 "name": "BaseBdev1", 00:16:09.412 "aliases": [ 00:16:09.412 "9356febe-cda1-47b0-9e8d-b5feadf150f4" 00:16:09.412 ], 00:16:09.412 "product_name": "Malloc disk", 00:16:09.412 "block_size": 512, 00:16:09.412 "num_blocks": 65536, 00:16:09.412 "uuid": "9356febe-cda1-47b0-9e8d-b5feadf150f4", 00:16:09.412 "assigned_rate_limits": { 00:16:09.412 "rw_ios_per_sec": 0, 00:16:09.412 "rw_mbytes_per_sec": 0, 
00:16:09.412 "r_mbytes_per_sec": 0, 00:16:09.412 "w_mbytes_per_sec": 0 00:16:09.412 }, 00:16:09.412 "claimed": true, 00:16:09.412 "claim_type": "exclusive_write", 00:16:09.413 "zoned": false, 00:16:09.413 "supported_io_types": { 00:16:09.413 "read": true, 00:16:09.413 "write": true, 00:16:09.413 "unmap": true, 00:16:09.413 "flush": true, 00:16:09.413 "reset": true, 00:16:09.413 "nvme_admin": false, 00:16:09.413 "nvme_io": false, 00:16:09.413 "nvme_io_md": false, 00:16:09.413 "write_zeroes": true, 00:16:09.413 "zcopy": true, 00:16:09.413 "get_zone_info": false, 00:16:09.413 "zone_management": false, 00:16:09.413 "zone_append": false, 00:16:09.413 "compare": false, 00:16:09.413 "compare_and_write": false, 00:16:09.413 "abort": true, 00:16:09.413 "seek_hole": false, 00:16:09.413 "seek_data": false, 00:16:09.413 "copy": true, 00:16:09.413 "nvme_iov_md": false 00:16:09.413 }, 00:16:09.413 "memory_domains": [ 00:16:09.413 { 00:16:09.413 "dma_device_id": "system", 00:16:09.413 "dma_device_type": 1 00:16:09.413 }, 00:16:09.413 { 00:16:09.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.413 "dma_device_type": 2 00:16:09.413 } 00:16:09.413 ], 00:16:09.413 "driver_specific": {} 00:16:09.413 } 00:16:09.413 ] 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:09.413 04:37:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.413 "name": "Existed_Raid", 00:16:09.413 "uuid": "86140eb9-e696-48e1-af62-376b1c37465a", 00:16:09.413 "strip_size_kb": 64, 00:16:09.413 "state": "configuring", 00:16:09.413 "raid_level": "concat", 00:16:09.413 "superblock": true, 00:16:09.413 "num_base_bdevs": 4, 00:16:09.413 "num_base_bdevs_discovered": 3, 00:16:09.413 "num_base_bdevs_operational": 4, 00:16:09.413 "base_bdevs_list": [ 00:16:09.413 { 00:16:09.413 "name": "BaseBdev1", 00:16:09.413 "uuid": "9356febe-cda1-47b0-9e8d-b5feadf150f4", 00:16:09.413 "is_configured": true, 00:16:09.413 "data_offset": 2048, 00:16:09.413 "data_size": 63488 00:16:09.413 }, 00:16:09.413 { 
00:16:09.413 "name": null, 00:16:09.413 "uuid": "11647233-a8f7-4426-9573-e58ce310525f", 00:16:09.413 "is_configured": false, 00:16:09.413 "data_offset": 0, 00:16:09.413 "data_size": 63488 00:16:09.413 }, 00:16:09.413 { 00:16:09.413 "name": "BaseBdev3", 00:16:09.413 "uuid": "69425c35-11be-47eb-9b4a-098588995043", 00:16:09.413 "is_configured": true, 00:16:09.413 "data_offset": 2048, 00:16:09.413 "data_size": 63488 00:16:09.413 }, 00:16:09.413 { 00:16:09.413 "name": "BaseBdev4", 00:16:09.413 "uuid": "ea43ac89-8178-478f-a7ac-b3e6ac846f1c", 00:16:09.413 "is_configured": true, 00:16:09.413 "data_offset": 2048, 00:16:09.413 "data_size": 63488 00:16:09.413 } 00:16:09.413 ] 00:16:09.413 }' 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.413 04:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.980 [2024-11-27 04:37:57.537060] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.980 04:37:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.980 "name": "Existed_Raid", 00:16:09.980 "uuid": "86140eb9-e696-48e1-af62-376b1c37465a", 00:16:09.980 "strip_size_kb": 64, 00:16:09.980 "state": "configuring", 00:16:09.980 "raid_level": "concat", 00:16:09.980 "superblock": true, 00:16:09.980 "num_base_bdevs": 4, 00:16:09.980 "num_base_bdevs_discovered": 2, 00:16:09.980 "num_base_bdevs_operational": 4, 00:16:09.980 "base_bdevs_list": [ 00:16:09.980 { 00:16:09.980 "name": "BaseBdev1", 00:16:09.980 "uuid": "9356febe-cda1-47b0-9e8d-b5feadf150f4", 00:16:09.980 "is_configured": true, 00:16:09.980 "data_offset": 2048, 00:16:09.980 "data_size": 63488 00:16:09.980 }, 00:16:09.980 { 00:16:09.980 "name": null, 00:16:09.980 "uuid": "11647233-a8f7-4426-9573-e58ce310525f", 00:16:09.980 "is_configured": false, 00:16:09.980 "data_offset": 0, 00:16:09.980 "data_size": 63488 00:16:09.980 }, 00:16:09.980 { 00:16:09.980 "name": null, 00:16:09.980 "uuid": "69425c35-11be-47eb-9b4a-098588995043", 00:16:09.980 "is_configured": false, 00:16:09.980 "data_offset": 0, 00:16:09.980 "data_size": 63488 00:16:09.980 }, 00:16:09.980 { 00:16:09.980 "name": "BaseBdev4", 00:16:09.980 "uuid": "ea43ac89-8178-478f-a7ac-b3e6ac846f1c", 00:16:09.980 "is_configured": true, 00:16:09.980 "data_offset": 2048, 00:16:09.980 "data_size": 63488 00:16:09.980 } 00:16:09.980 ] 00:16:09.980 }' 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.980 04:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.547 04:37:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.547 [2024-11-27 04:37:58.077157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.547 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.547 "name": "Existed_Raid", 00:16:10.547 "uuid": "86140eb9-e696-48e1-af62-376b1c37465a", 00:16:10.547 "strip_size_kb": 64, 00:16:10.547 "state": "configuring", 00:16:10.547 "raid_level": "concat", 00:16:10.547 "superblock": true, 00:16:10.547 "num_base_bdevs": 4, 00:16:10.547 "num_base_bdevs_discovered": 3, 00:16:10.547 "num_base_bdevs_operational": 4, 00:16:10.547 "base_bdevs_list": [ 00:16:10.547 { 00:16:10.547 "name": "BaseBdev1", 00:16:10.547 "uuid": "9356febe-cda1-47b0-9e8d-b5feadf150f4", 00:16:10.548 "is_configured": true, 00:16:10.548 "data_offset": 2048, 00:16:10.548 "data_size": 63488 00:16:10.548 }, 00:16:10.548 { 00:16:10.548 "name": null, 00:16:10.548 "uuid": "11647233-a8f7-4426-9573-e58ce310525f", 00:16:10.548 "is_configured": false, 00:16:10.548 "data_offset": 0, 00:16:10.548 "data_size": 63488 00:16:10.548 }, 00:16:10.548 { 00:16:10.548 "name": "BaseBdev3", 00:16:10.548 "uuid": "69425c35-11be-47eb-9b4a-098588995043", 00:16:10.548 "is_configured": true, 00:16:10.548 "data_offset": 2048, 00:16:10.548 "data_size": 63488 00:16:10.548 }, 00:16:10.548 { 00:16:10.548 "name": "BaseBdev4", 00:16:10.548 "uuid": 
"ea43ac89-8178-478f-a7ac-b3e6ac846f1c", 00:16:10.548 "is_configured": true, 00:16:10.548 "data_offset": 2048, 00:16:10.548 "data_size": 63488 00:16:10.548 } 00:16:10.548 ] 00:16:10.548 }' 00:16:10.548 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.548 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.115 [2024-11-27 04:37:58.617352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.115 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.374 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.374 "name": "Existed_Raid", 00:16:11.374 "uuid": "86140eb9-e696-48e1-af62-376b1c37465a", 00:16:11.374 "strip_size_kb": 64, 00:16:11.374 "state": "configuring", 00:16:11.374 "raid_level": "concat", 00:16:11.374 "superblock": true, 00:16:11.374 "num_base_bdevs": 4, 00:16:11.374 "num_base_bdevs_discovered": 2, 00:16:11.374 "num_base_bdevs_operational": 4, 00:16:11.374 "base_bdevs_list": [ 00:16:11.374 { 00:16:11.374 "name": null, 00:16:11.374 
"uuid": "9356febe-cda1-47b0-9e8d-b5feadf150f4", 00:16:11.374 "is_configured": false, 00:16:11.374 "data_offset": 0, 00:16:11.374 "data_size": 63488 00:16:11.374 }, 00:16:11.374 { 00:16:11.374 "name": null, 00:16:11.374 "uuid": "11647233-a8f7-4426-9573-e58ce310525f", 00:16:11.374 "is_configured": false, 00:16:11.374 "data_offset": 0, 00:16:11.374 "data_size": 63488 00:16:11.374 }, 00:16:11.374 { 00:16:11.374 "name": "BaseBdev3", 00:16:11.374 "uuid": "69425c35-11be-47eb-9b4a-098588995043", 00:16:11.374 "is_configured": true, 00:16:11.374 "data_offset": 2048, 00:16:11.374 "data_size": 63488 00:16:11.374 }, 00:16:11.374 { 00:16:11.374 "name": "BaseBdev4", 00:16:11.374 "uuid": "ea43ac89-8178-478f-a7ac-b3e6ac846f1c", 00:16:11.374 "is_configured": true, 00:16:11.374 "data_offset": 2048, 00:16:11.374 "data_size": 63488 00:16:11.374 } 00:16:11.374 ] 00:16:11.374 }' 00:16:11.375 04:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.375 04:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.634 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:11.634 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.634 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.634 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.634 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.635 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:11.635 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:11.635 04:37:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.635 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.635 [2024-11-27 04:37:59.249519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.918 04:37:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.918 "name": "Existed_Raid", 00:16:11.918 "uuid": "86140eb9-e696-48e1-af62-376b1c37465a", 00:16:11.918 "strip_size_kb": 64, 00:16:11.918 "state": "configuring", 00:16:11.918 "raid_level": "concat", 00:16:11.918 "superblock": true, 00:16:11.918 "num_base_bdevs": 4, 00:16:11.918 "num_base_bdevs_discovered": 3, 00:16:11.918 "num_base_bdevs_operational": 4, 00:16:11.918 "base_bdevs_list": [ 00:16:11.918 { 00:16:11.918 "name": null, 00:16:11.918 "uuid": "9356febe-cda1-47b0-9e8d-b5feadf150f4", 00:16:11.918 "is_configured": false, 00:16:11.918 "data_offset": 0, 00:16:11.918 "data_size": 63488 00:16:11.918 }, 00:16:11.918 { 00:16:11.918 "name": "BaseBdev2", 00:16:11.918 "uuid": "11647233-a8f7-4426-9573-e58ce310525f", 00:16:11.918 "is_configured": true, 00:16:11.918 "data_offset": 2048, 00:16:11.918 "data_size": 63488 00:16:11.918 }, 00:16:11.918 { 00:16:11.918 "name": "BaseBdev3", 00:16:11.918 "uuid": "69425c35-11be-47eb-9b4a-098588995043", 00:16:11.918 "is_configured": true, 00:16:11.918 "data_offset": 2048, 00:16:11.918 "data_size": 63488 00:16:11.918 }, 00:16:11.918 { 00:16:11.918 "name": "BaseBdev4", 00:16:11.918 "uuid": "ea43ac89-8178-478f-a7ac-b3e6ac846f1c", 00:16:11.918 "is_configured": true, 00:16:11.918 "data_offset": 2048, 00:16:11.918 "data_size": 63488 00:16:11.918 } 00:16:11.918 ] 00:16:11.918 }' 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.918 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.177 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.177 04:37:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:12.177 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.177 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.177 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9356febe-cda1-47b0-9e8d-b5feadf150f4 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.438 [2024-11-27 04:37:59.919833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:12.438 [2024-11-27 04:37:59.920145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:12.438 [2024-11-27 04:37:59.920163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:12.438 [2024-11-27 04:37:59.920480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:16:12.438 [2024-11-27 04:37:59.920653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:12.438 [2024-11-27 04:37:59.920674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:12.438 [2024-11-27 04:37:59.920855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.438 NewBaseBdev 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.438 04:37:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.438 [ 00:16:12.438 { 00:16:12.438 "name": "NewBaseBdev", 00:16:12.438 "aliases": [ 00:16:12.438 "9356febe-cda1-47b0-9e8d-b5feadf150f4" 00:16:12.438 ], 00:16:12.438 "product_name": "Malloc disk", 00:16:12.438 "block_size": 512, 00:16:12.438 "num_blocks": 65536, 00:16:12.438 "uuid": "9356febe-cda1-47b0-9e8d-b5feadf150f4", 00:16:12.438 "assigned_rate_limits": { 00:16:12.438 "rw_ios_per_sec": 0, 00:16:12.438 "rw_mbytes_per_sec": 0, 00:16:12.438 "r_mbytes_per_sec": 0, 00:16:12.438 "w_mbytes_per_sec": 0 00:16:12.438 }, 00:16:12.438 "claimed": true, 00:16:12.438 "claim_type": "exclusive_write", 00:16:12.438 "zoned": false, 00:16:12.438 "supported_io_types": { 00:16:12.438 "read": true, 00:16:12.438 "write": true, 00:16:12.438 "unmap": true, 00:16:12.438 "flush": true, 00:16:12.438 "reset": true, 00:16:12.438 "nvme_admin": false, 00:16:12.438 "nvme_io": false, 00:16:12.438 "nvme_io_md": false, 00:16:12.438 "write_zeroes": true, 00:16:12.438 "zcopy": true, 00:16:12.438 "get_zone_info": false, 00:16:12.438 "zone_management": false, 00:16:12.438 "zone_append": false, 00:16:12.438 "compare": false, 00:16:12.438 "compare_and_write": false, 00:16:12.438 "abort": true, 00:16:12.438 "seek_hole": false, 00:16:12.438 "seek_data": false, 00:16:12.438 "copy": true, 00:16:12.438 "nvme_iov_md": false 00:16:12.438 }, 00:16:12.438 "memory_domains": [ 00:16:12.438 { 00:16:12.438 "dma_device_id": "system", 00:16:12.438 "dma_device_type": 1 00:16:12.438 }, 00:16:12.438 { 00:16:12.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.438 "dma_device_type": 2 00:16:12.438 } 00:16:12.438 ], 00:16:12.438 "driver_specific": {} 00:16:12.438 } 00:16:12.438 ] 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:12.438 04:37:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.438 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.439 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.439 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.439 04:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.439 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.439 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.439 04:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.439 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.439 "name": "Existed_Raid", 00:16:12.439 "uuid": "86140eb9-e696-48e1-af62-376b1c37465a", 00:16:12.439 "strip_size_kb": 64, 00:16:12.439 
"state": "online", 00:16:12.439 "raid_level": "concat", 00:16:12.439 "superblock": true, 00:16:12.439 "num_base_bdevs": 4, 00:16:12.439 "num_base_bdevs_discovered": 4, 00:16:12.439 "num_base_bdevs_operational": 4, 00:16:12.439 "base_bdevs_list": [ 00:16:12.439 { 00:16:12.439 "name": "NewBaseBdev", 00:16:12.439 "uuid": "9356febe-cda1-47b0-9e8d-b5feadf150f4", 00:16:12.439 "is_configured": true, 00:16:12.439 "data_offset": 2048, 00:16:12.439 "data_size": 63488 00:16:12.439 }, 00:16:12.439 { 00:16:12.439 "name": "BaseBdev2", 00:16:12.439 "uuid": "11647233-a8f7-4426-9573-e58ce310525f", 00:16:12.439 "is_configured": true, 00:16:12.439 "data_offset": 2048, 00:16:12.439 "data_size": 63488 00:16:12.439 }, 00:16:12.439 { 00:16:12.439 "name": "BaseBdev3", 00:16:12.439 "uuid": "69425c35-11be-47eb-9b4a-098588995043", 00:16:12.439 "is_configured": true, 00:16:12.439 "data_offset": 2048, 00:16:12.439 "data_size": 63488 00:16:12.439 }, 00:16:12.439 { 00:16:12.439 "name": "BaseBdev4", 00:16:12.439 "uuid": "ea43ac89-8178-478f-a7ac-b3e6ac846f1c", 00:16:12.439 "is_configured": true, 00:16:12.439 "data_offset": 2048, 00:16:12.439 "data_size": 63488 00:16:12.439 } 00:16:12.439 ] 00:16:12.439 }' 00:16:12.439 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.439 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:13.005 
04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.005 [2024-11-27 04:38:00.452536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.005 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:13.005 "name": "Existed_Raid", 00:16:13.005 "aliases": [ 00:16:13.005 "86140eb9-e696-48e1-af62-376b1c37465a" 00:16:13.005 ], 00:16:13.005 "product_name": "Raid Volume", 00:16:13.005 "block_size": 512, 00:16:13.005 "num_blocks": 253952, 00:16:13.005 "uuid": "86140eb9-e696-48e1-af62-376b1c37465a", 00:16:13.005 "assigned_rate_limits": { 00:16:13.005 "rw_ios_per_sec": 0, 00:16:13.005 "rw_mbytes_per_sec": 0, 00:16:13.005 "r_mbytes_per_sec": 0, 00:16:13.005 "w_mbytes_per_sec": 0 00:16:13.005 }, 00:16:13.005 "claimed": false, 00:16:13.005 "zoned": false, 00:16:13.005 "supported_io_types": { 00:16:13.005 "read": true, 00:16:13.005 "write": true, 00:16:13.005 "unmap": true, 00:16:13.005 "flush": true, 00:16:13.005 "reset": true, 00:16:13.005 "nvme_admin": false, 00:16:13.006 "nvme_io": false, 00:16:13.006 "nvme_io_md": false, 00:16:13.006 "write_zeroes": true, 00:16:13.006 "zcopy": false, 00:16:13.006 "get_zone_info": false, 00:16:13.006 "zone_management": false, 00:16:13.006 "zone_append": false, 00:16:13.006 "compare": false, 00:16:13.006 "compare_and_write": false, 00:16:13.006 "abort": 
false, 00:16:13.006 "seek_hole": false, 00:16:13.006 "seek_data": false, 00:16:13.006 "copy": false, 00:16:13.006 "nvme_iov_md": false 00:16:13.006 }, 00:16:13.006 "memory_domains": [ 00:16:13.006 { 00:16:13.006 "dma_device_id": "system", 00:16:13.006 "dma_device_type": 1 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.006 "dma_device_type": 2 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "dma_device_id": "system", 00:16:13.006 "dma_device_type": 1 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.006 "dma_device_type": 2 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "dma_device_id": "system", 00:16:13.006 "dma_device_type": 1 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.006 "dma_device_type": 2 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "dma_device_id": "system", 00:16:13.006 "dma_device_type": 1 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.006 "dma_device_type": 2 00:16:13.006 } 00:16:13.006 ], 00:16:13.006 "driver_specific": { 00:16:13.006 "raid": { 00:16:13.006 "uuid": "86140eb9-e696-48e1-af62-376b1c37465a", 00:16:13.006 "strip_size_kb": 64, 00:16:13.006 "state": "online", 00:16:13.006 "raid_level": "concat", 00:16:13.006 "superblock": true, 00:16:13.006 "num_base_bdevs": 4, 00:16:13.006 "num_base_bdevs_discovered": 4, 00:16:13.006 "num_base_bdevs_operational": 4, 00:16:13.006 "base_bdevs_list": [ 00:16:13.006 { 00:16:13.006 "name": "NewBaseBdev", 00:16:13.006 "uuid": "9356febe-cda1-47b0-9e8d-b5feadf150f4", 00:16:13.006 "is_configured": true, 00:16:13.006 "data_offset": 2048, 00:16:13.006 "data_size": 63488 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "name": "BaseBdev2", 00:16:13.006 "uuid": "11647233-a8f7-4426-9573-e58ce310525f", 00:16:13.006 "is_configured": true, 00:16:13.006 "data_offset": 2048, 00:16:13.006 "data_size": 63488 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 
"name": "BaseBdev3", 00:16:13.006 "uuid": "69425c35-11be-47eb-9b4a-098588995043", 00:16:13.006 "is_configured": true, 00:16:13.006 "data_offset": 2048, 00:16:13.006 "data_size": 63488 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "name": "BaseBdev4", 00:16:13.006 "uuid": "ea43ac89-8178-478f-a7ac-b3e6ac846f1c", 00:16:13.006 "is_configured": true, 00:16:13.006 "data_offset": 2048, 00:16:13.006 "data_size": 63488 00:16:13.006 } 00:16:13.006 ] 00:16:13.006 } 00:16:13.006 } 00:16:13.006 }' 00:16:13.006 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:13.006 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:13.006 BaseBdev2 00:16:13.006 BaseBdev3 00:16:13.006 BaseBdev4' 00:16:13.006 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.006 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:13.006 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.006 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:13.006 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.006 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.006 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.006 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.265 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.265 04:38:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.265 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.265 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.266 [2024-11-27 04:38:00.796134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.266 [2024-11-27 04:38:00.796188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.266 [2024-11-27 04:38:00.796277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.266 [2024-11-27 04:38:00.796367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.266 [2024-11-27 04:38:00.796384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72194 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72194 ']' 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72194 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72194 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72194' 00:16:13.266 killing process with pid 72194 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72194 00:16:13.266 [2024-11-27 04:38:00.837060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.266 04:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72194 00:16:13.831 [2024-11-27 04:38:01.186298] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.767 04:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:14.767 00:16:14.767 real 0m12.832s 00:16:14.767 user 0m21.257s 00:16:14.767 sys 0m1.816s 00:16:14.767 04:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.767 04:38:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.767 ************************************ 00:16:14.767 END TEST raid_state_function_test_sb 00:16:14.767 ************************************ 00:16:14.767 04:38:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:14.767 04:38:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:14.767 04:38:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.767 04:38:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.767 ************************************ 00:16:14.767 START TEST raid_superblock_test 00:16:14.767 ************************************ 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:14.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72870 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72870 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72870 ']' 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.767 04:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.026 [2024-11-27 04:38:02.405305] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:16:15.026 [2024-11-27 04:38:02.405479] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72870 ] 00:16:15.026 [2024-11-27 04:38:02.594182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.331 [2024-11-27 04:38:02.748502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.590 [2024-11-27 04:38:02.956088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.590 [2024-11-27 04:38:02.956164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.848 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.848 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:15.848 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:15.849 
04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.849 malloc1 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.849 [2024-11-27 04:38:03.448865] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:15.849 [2024-11-27 04:38:03.449070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.849 [2024-11-27 04:38:03.449230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:15.849 [2024-11-27 04:38:03.449370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.849 [2024-11-27 04:38:03.452295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.849 [2024-11-27 04:38:03.452453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:15.849 pt1 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.849 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.109 malloc2 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.109 [2024-11-27 04:38:03.501692] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:16.109 [2024-11-27 04:38:03.501919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.109 [2024-11-27 04:38:03.502088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:16.109 [2024-11-27 04:38:03.502222] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.109 [2024-11-27 04:38:03.505039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.109 [2024-11-27 04:38:03.505191] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:16.109 
pt2 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.109 malloc3 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.109 [2024-11-27 04:38:03.572715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:16.109 [2024-11-27 04:38:03.572917] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.109 [2024-11-27 04:38:03.573056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:16.109 [2024-11-27 04:38:03.573177] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.109 [2024-11-27 04:38:03.576088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.109 [2024-11-27 04:38:03.576262] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:16.109 pt3 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.109 malloc4 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.109 [2024-11-27 04:38:03.625058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:16.109 [2024-11-27 04:38:03.625257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.109 [2024-11-27 04:38:03.625413] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:16.109 [2024-11-27 04:38:03.625545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.109 [2024-11-27 04:38:03.628390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.109 [2024-11-27 04:38:03.628541] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:16.109 pt4 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.109 [2024-11-27 04:38:03.633275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:16.109 [2024-11-27 
04:38:03.635743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.109 [2024-11-27 04:38:03.635885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:16.109 [2024-11-27 04:38:03.635961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:16.109 [2024-11-27 04:38:03.636202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:16.109 [2024-11-27 04:38:03.636220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:16.109 [2024-11-27 04:38:03.636537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:16.109 [2024-11-27 04:38:03.636753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:16.109 [2024-11-27 04:38:03.636793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:16.109 [2024-11-27 04:38:03.636977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.109 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.109 "name": "raid_bdev1", 00:16:16.109 "uuid": "6570624f-c69e-4d20-9723-2d85acdb5046", 00:16:16.109 "strip_size_kb": 64, 00:16:16.109 "state": "online", 00:16:16.109 "raid_level": "concat", 00:16:16.109 "superblock": true, 00:16:16.109 "num_base_bdevs": 4, 00:16:16.109 "num_base_bdevs_discovered": 4, 00:16:16.110 "num_base_bdevs_operational": 4, 00:16:16.110 "base_bdevs_list": [ 00:16:16.110 { 00:16:16.110 "name": "pt1", 00:16:16.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.110 "is_configured": true, 00:16:16.110 "data_offset": 2048, 00:16:16.110 "data_size": 63488 00:16:16.110 }, 00:16:16.110 { 00:16:16.110 "name": "pt2", 00:16:16.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.110 "is_configured": true, 00:16:16.110 "data_offset": 2048, 00:16:16.110 "data_size": 63488 00:16:16.110 }, 00:16:16.110 { 00:16:16.110 "name": "pt3", 00:16:16.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.110 "is_configured": true, 00:16:16.110 "data_offset": 2048, 00:16:16.110 
"data_size": 63488 00:16:16.110 }, 00:16:16.110 { 00:16:16.110 "name": "pt4", 00:16:16.110 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:16.110 "is_configured": true, 00:16:16.110 "data_offset": 2048, 00:16:16.110 "data_size": 63488 00:16:16.110 } 00:16:16.110 ] 00:16:16.110 }' 00:16:16.110 04:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.110 04:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.676 [2024-11-27 04:38:04.121818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.676 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:16.676 "name": "raid_bdev1", 00:16:16.676 "aliases": [ 00:16:16.676 "6570624f-c69e-4d20-9723-2d85acdb5046" 
00:16:16.676 ], 00:16:16.676 "product_name": "Raid Volume", 00:16:16.676 "block_size": 512, 00:16:16.676 "num_blocks": 253952, 00:16:16.676 "uuid": "6570624f-c69e-4d20-9723-2d85acdb5046", 00:16:16.676 "assigned_rate_limits": { 00:16:16.676 "rw_ios_per_sec": 0, 00:16:16.676 "rw_mbytes_per_sec": 0, 00:16:16.676 "r_mbytes_per_sec": 0, 00:16:16.676 "w_mbytes_per_sec": 0 00:16:16.676 }, 00:16:16.676 "claimed": false, 00:16:16.676 "zoned": false, 00:16:16.676 "supported_io_types": { 00:16:16.676 "read": true, 00:16:16.676 "write": true, 00:16:16.676 "unmap": true, 00:16:16.676 "flush": true, 00:16:16.676 "reset": true, 00:16:16.676 "nvme_admin": false, 00:16:16.676 "nvme_io": false, 00:16:16.676 "nvme_io_md": false, 00:16:16.676 "write_zeroes": true, 00:16:16.676 "zcopy": false, 00:16:16.676 "get_zone_info": false, 00:16:16.676 "zone_management": false, 00:16:16.676 "zone_append": false, 00:16:16.676 "compare": false, 00:16:16.676 "compare_and_write": false, 00:16:16.676 "abort": false, 00:16:16.676 "seek_hole": false, 00:16:16.676 "seek_data": false, 00:16:16.676 "copy": false, 00:16:16.676 "nvme_iov_md": false 00:16:16.676 }, 00:16:16.676 "memory_domains": [ 00:16:16.676 { 00:16:16.676 "dma_device_id": "system", 00:16:16.676 "dma_device_type": 1 00:16:16.676 }, 00:16:16.676 { 00:16:16.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.676 "dma_device_type": 2 00:16:16.676 }, 00:16:16.676 { 00:16:16.676 "dma_device_id": "system", 00:16:16.676 "dma_device_type": 1 00:16:16.676 }, 00:16:16.676 { 00:16:16.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.676 "dma_device_type": 2 00:16:16.676 }, 00:16:16.676 { 00:16:16.676 "dma_device_id": "system", 00:16:16.676 "dma_device_type": 1 00:16:16.676 }, 00:16:16.676 { 00:16:16.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.676 "dma_device_type": 2 00:16:16.676 }, 00:16:16.676 { 00:16:16.676 "dma_device_id": "system", 00:16:16.676 "dma_device_type": 1 00:16:16.676 }, 00:16:16.677 { 00:16:16.677 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:16.677 "dma_device_type": 2 00:16:16.677 } 00:16:16.677 ], 00:16:16.677 "driver_specific": { 00:16:16.677 "raid": { 00:16:16.677 "uuid": "6570624f-c69e-4d20-9723-2d85acdb5046", 00:16:16.677 "strip_size_kb": 64, 00:16:16.677 "state": "online", 00:16:16.677 "raid_level": "concat", 00:16:16.677 "superblock": true, 00:16:16.677 "num_base_bdevs": 4, 00:16:16.677 "num_base_bdevs_discovered": 4, 00:16:16.677 "num_base_bdevs_operational": 4, 00:16:16.677 "base_bdevs_list": [ 00:16:16.677 { 00:16:16.677 "name": "pt1", 00:16:16.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.677 "is_configured": true, 00:16:16.677 "data_offset": 2048, 00:16:16.677 "data_size": 63488 00:16:16.677 }, 00:16:16.677 { 00:16:16.677 "name": "pt2", 00:16:16.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.677 "is_configured": true, 00:16:16.677 "data_offset": 2048, 00:16:16.677 "data_size": 63488 00:16:16.677 }, 00:16:16.677 { 00:16:16.677 "name": "pt3", 00:16:16.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.677 "is_configured": true, 00:16:16.677 "data_offset": 2048, 00:16:16.677 "data_size": 63488 00:16:16.677 }, 00:16:16.677 { 00:16:16.677 "name": "pt4", 00:16:16.677 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:16.677 "is_configured": true, 00:16:16.677 "data_offset": 2048, 00:16:16.677 "data_size": 63488 00:16:16.677 } 00:16:16.677 ] 00:16:16.677 } 00:16:16.677 } 00:16:16.677 }' 00:16:16.677 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.677 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:16.677 pt2 00:16:16.677 pt3 00:16:16.677 pt4' 00:16:16.677 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.677 04:38:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:16.677 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.677 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.677 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:16.677 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.677 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.936 04:38:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.936 [2024-11-27 04:38:04.501799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6570624f-c69e-4d20-9723-2d85acdb5046 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6570624f-c69e-4d20-9723-2d85acdb5046 ']' 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.936 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.196 [2024-11-27 04:38:04.561472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.196 [2024-11-27 04:38:04.561610] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.196 [2024-11-27 04:38:04.561843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.196 [2024-11-27 04:38:04.562056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.196 [2024-11-27 04:38:04.562189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:17.196 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.197 04:38:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.197 [2024-11-27 04:38:04.709549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:17.197 [2024-11-27 04:38:04.712148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:17.197 [2024-11-27 04:38:04.712246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:17.197 [2024-11-27 04:38:04.712301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:17.197 [2024-11-27 04:38:04.712373] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:17.197 [2024-11-27 04:38:04.712444] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:17.197 [2024-11-27 04:38:04.712478] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:17.197 [2024-11-27 04:38:04.712509] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:17.197 [2024-11-27 04:38:04.712531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.197 [2024-11-27 04:38:04.712547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:16:17.197 request: 00:16:17.197 { 00:16:17.197 "name": "raid_bdev1", 00:16:17.197 "raid_level": "concat", 00:16:17.197 "base_bdevs": [ 00:16:17.197 "malloc1", 00:16:17.197 "malloc2", 00:16:17.197 "malloc3", 00:16:17.197 "malloc4" 00:16:17.197 ], 00:16:17.197 "strip_size_kb": 64, 00:16:17.197 "superblock": false, 00:16:17.197 "method": "bdev_raid_create", 00:16:17.197 "req_id": 1 00:16:17.197 } 00:16:17.197 Got JSON-RPC error response 00:16:17.197 response: 00:16:17.197 { 00:16:17.197 "code": -17, 00:16:17.197 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:17.197 } 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.197 [2024-11-27 04:38:04.785528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:17.197 [2024-11-27 04:38:04.785705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.197 [2024-11-27 04:38:04.785847] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:17.197 [2024-11-27 04:38:04.785990] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.197 [2024-11-27 04:38:04.788880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.197 [2024-11-27 04:38:04.789045] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:17.197 [2024-11-27 04:38:04.789237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:17.197 [2024-11-27 04:38:04.789417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:17.197 pt1 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.197 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.456 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.456 "name": "raid_bdev1", 00:16:17.456 "uuid": "6570624f-c69e-4d20-9723-2d85acdb5046", 00:16:17.456 "strip_size_kb": 64, 00:16:17.456 "state": "configuring", 00:16:17.456 "raid_level": "concat", 00:16:17.456 "superblock": true, 00:16:17.456 "num_base_bdevs": 4, 00:16:17.456 "num_base_bdevs_discovered": 1, 00:16:17.456 "num_base_bdevs_operational": 4, 00:16:17.456 "base_bdevs_list": [ 00:16:17.456 { 00:16:17.456 "name": "pt1", 00:16:17.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.456 "is_configured": true, 00:16:17.456 "data_offset": 2048, 00:16:17.456 "data_size": 63488 00:16:17.456 }, 00:16:17.456 { 00:16:17.456 "name": null, 00:16:17.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.456 "is_configured": false, 00:16:17.456 "data_offset": 2048, 00:16:17.456 "data_size": 63488 00:16:17.456 }, 00:16:17.456 { 00:16:17.456 "name": null, 00:16:17.456 
"uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.456 "is_configured": false, 00:16:17.456 "data_offset": 2048, 00:16:17.456 "data_size": 63488 00:16:17.456 }, 00:16:17.456 { 00:16:17.456 "name": null, 00:16:17.456 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.456 "is_configured": false, 00:16:17.456 "data_offset": 2048, 00:16:17.456 "data_size": 63488 00:16:17.456 } 00:16:17.456 ] 00:16:17.456 }' 00:16:17.456 04:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.456 04:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.714 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:17.714 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:17.714 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.714 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.714 [2024-11-27 04:38:05.325919] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:17.714 [2024-11-27 04:38:05.326145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.714 [2024-11-27 04:38:05.326184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:17.714 [2024-11-27 04:38:05.326214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.714 [2024-11-27 04:38:05.326762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.714 [2024-11-27 04:38:05.326827] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:17.714 [2024-11-27 04:38:05.326932] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:17.714 [2024-11-27 04:38:05.326969] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:17.714 pt2 00:16:17.714 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.714 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:17.714 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.714 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.714 [2024-11-27 04:38:05.333907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.973 04:38:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.973 "name": "raid_bdev1", 00:16:17.973 "uuid": "6570624f-c69e-4d20-9723-2d85acdb5046", 00:16:17.973 "strip_size_kb": 64, 00:16:17.973 "state": "configuring", 00:16:17.973 "raid_level": "concat", 00:16:17.973 "superblock": true, 00:16:17.973 "num_base_bdevs": 4, 00:16:17.973 "num_base_bdevs_discovered": 1, 00:16:17.973 "num_base_bdevs_operational": 4, 00:16:17.973 "base_bdevs_list": [ 00:16:17.973 { 00:16:17.973 "name": "pt1", 00:16:17.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.973 "is_configured": true, 00:16:17.973 "data_offset": 2048, 00:16:17.973 "data_size": 63488 00:16:17.973 }, 00:16:17.973 { 00:16:17.973 "name": null, 00:16:17.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.973 "is_configured": false, 00:16:17.973 "data_offset": 0, 00:16:17.973 "data_size": 63488 00:16:17.973 }, 00:16:17.973 { 00:16:17.973 "name": null, 00:16:17.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.973 "is_configured": false, 00:16:17.973 "data_offset": 2048, 00:16:17.973 "data_size": 63488 00:16:17.973 }, 00:16:17.973 { 00:16:17.973 "name": null, 00:16:17.973 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.973 "is_configured": false, 00:16:17.973 "data_offset": 2048, 00:16:17.973 "data_size": 63488 00:16:17.973 } 00:16:17.973 ] 00:16:17.973 }' 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.973 04:38:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.231 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:18.231 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.231 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.231 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.231 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.490 [2024-11-27 04:38:05.858112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.490 [2024-11-27 04:38:05.858195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.490 [2024-11-27 04:38:05.858226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:18.490 [2024-11-27 04:38:05.858241] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.490 [2024-11-27 04:38:05.858801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.490 [2024-11-27 04:38:05.858832] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.490 [2024-11-27 04:38:05.858936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:18.490 [2024-11-27 04:38:05.858974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.490 pt2 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.490 [2024-11-27 04:38:05.866089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:18.490 [2024-11-27 04:38:05.866276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.490 [2024-11-27 04:38:05.866314] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:18.490 [2024-11-27 04:38:05.866329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.490 [2024-11-27 04:38:05.866814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.490 [2024-11-27 04:38:05.866848] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:18.490 [2024-11-27 04:38:05.866932] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:18.490 [2024-11-27 04:38:05.866969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:18.490 pt3 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.490 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.490 [2024-11-27 04:38:05.874056] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:18.490 [2024-11-27 04:38:05.874231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.490 [2024-11-27 04:38:05.874303] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:18.490 [2024-11-27 04:38:05.874546] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.490 [2024-11-27 04:38:05.875097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.490 [2024-11-27 04:38:05.876346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:18.490 [2024-11-27 04:38:05.876561] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:18.490 [2024-11-27 04:38:05.876707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:18.490 [2024-11-27 04:38:05.877017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:18.490 [2024-11-27 04:38:05.877135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:18.490 [2024-11-27 04:38:05.877486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:18.491 [2024-11-27 04:38:05.877805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:18.491 [2024-11-27 04:38:05.877931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:18.491 [2024-11-27 04:38:05.878257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.491 pt4 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.491 "name": "raid_bdev1", 00:16:18.491 "uuid": "6570624f-c69e-4d20-9723-2d85acdb5046", 00:16:18.491 "strip_size_kb": 64, 00:16:18.491 "state": "online", 00:16:18.491 "raid_level": "concat", 00:16:18.491 
"superblock": true, 00:16:18.491 "num_base_bdevs": 4, 00:16:18.491 "num_base_bdevs_discovered": 4, 00:16:18.491 "num_base_bdevs_operational": 4, 00:16:18.491 "base_bdevs_list": [ 00:16:18.491 { 00:16:18.491 "name": "pt1", 00:16:18.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.491 "is_configured": true, 00:16:18.491 "data_offset": 2048, 00:16:18.491 "data_size": 63488 00:16:18.491 }, 00:16:18.491 { 00:16:18.491 "name": "pt2", 00:16:18.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.491 "is_configured": true, 00:16:18.491 "data_offset": 2048, 00:16:18.491 "data_size": 63488 00:16:18.491 }, 00:16:18.491 { 00:16:18.491 "name": "pt3", 00:16:18.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.491 "is_configured": true, 00:16:18.491 "data_offset": 2048, 00:16:18.491 "data_size": 63488 00:16:18.491 }, 00:16:18.491 { 00:16:18.491 "name": "pt4", 00:16:18.491 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:18.491 "is_configured": true, 00:16:18.491 "data_offset": 2048, 00:16:18.491 "data_size": 63488 00:16:18.491 } 00:16:18.491 ] 00:16:18.491 }' 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.491 04:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.750 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:18.750 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:18.750 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:18.750 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:18.750 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:18.750 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:18.750 04:38:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:18.750 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.750 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.750 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:18.750 [2024-11-27 04:38:06.362759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:19.009 "name": "raid_bdev1", 00:16:19.009 "aliases": [ 00:16:19.009 "6570624f-c69e-4d20-9723-2d85acdb5046" 00:16:19.009 ], 00:16:19.009 "product_name": "Raid Volume", 00:16:19.009 "block_size": 512, 00:16:19.009 "num_blocks": 253952, 00:16:19.009 "uuid": "6570624f-c69e-4d20-9723-2d85acdb5046", 00:16:19.009 "assigned_rate_limits": { 00:16:19.009 "rw_ios_per_sec": 0, 00:16:19.009 "rw_mbytes_per_sec": 0, 00:16:19.009 "r_mbytes_per_sec": 0, 00:16:19.009 "w_mbytes_per_sec": 0 00:16:19.009 }, 00:16:19.009 "claimed": false, 00:16:19.009 "zoned": false, 00:16:19.009 "supported_io_types": { 00:16:19.009 "read": true, 00:16:19.009 "write": true, 00:16:19.009 "unmap": true, 00:16:19.009 "flush": true, 00:16:19.009 "reset": true, 00:16:19.009 "nvme_admin": false, 00:16:19.009 "nvme_io": false, 00:16:19.009 "nvme_io_md": false, 00:16:19.009 "write_zeroes": true, 00:16:19.009 "zcopy": false, 00:16:19.009 "get_zone_info": false, 00:16:19.009 "zone_management": false, 00:16:19.009 "zone_append": false, 00:16:19.009 "compare": false, 00:16:19.009 "compare_and_write": false, 00:16:19.009 "abort": false, 00:16:19.009 "seek_hole": false, 00:16:19.009 "seek_data": false, 00:16:19.009 "copy": false, 00:16:19.009 "nvme_iov_md": false 00:16:19.009 }, 00:16:19.009 
"memory_domains": [ 00:16:19.009 { 00:16:19.009 "dma_device_id": "system", 00:16:19.009 "dma_device_type": 1 00:16:19.009 }, 00:16:19.009 { 00:16:19.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.009 "dma_device_type": 2 00:16:19.009 }, 00:16:19.009 { 00:16:19.009 "dma_device_id": "system", 00:16:19.009 "dma_device_type": 1 00:16:19.009 }, 00:16:19.009 { 00:16:19.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.009 "dma_device_type": 2 00:16:19.009 }, 00:16:19.009 { 00:16:19.009 "dma_device_id": "system", 00:16:19.009 "dma_device_type": 1 00:16:19.009 }, 00:16:19.009 { 00:16:19.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.009 "dma_device_type": 2 00:16:19.009 }, 00:16:19.009 { 00:16:19.009 "dma_device_id": "system", 00:16:19.009 "dma_device_type": 1 00:16:19.009 }, 00:16:19.009 { 00:16:19.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.009 "dma_device_type": 2 00:16:19.009 } 00:16:19.009 ], 00:16:19.009 "driver_specific": { 00:16:19.009 "raid": { 00:16:19.009 "uuid": "6570624f-c69e-4d20-9723-2d85acdb5046", 00:16:19.009 "strip_size_kb": 64, 00:16:19.009 "state": "online", 00:16:19.009 "raid_level": "concat", 00:16:19.009 "superblock": true, 00:16:19.009 "num_base_bdevs": 4, 00:16:19.009 "num_base_bdevs_discovered": 4, 00:16:19.009 "num_base_bdevs_operational": 4, 00:16:19.009 "base_bdevs_list": [ 00:16:19.009 { 00:16:19.009 "name": "pt1", 00:16:19.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.009 "is_configured": true, 00:16:19.009 "data_offset": 2048, 00:16:19.009 "data_size": 63488 00:16:19.009 }, 00:16:19.009 { 00:16:19.009 "name": "pt2", 00:16:19.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.009 "is_configured": true, 00:16:19.009 "data_offset": 2048, 00:16:19.009 "data_size": 63488 00:16:19.009 }, 00:16:19.009 { 00:16:19.009 "name": "pt3", 00:16:19.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.009 "is_configured": true, 00:16:19.009 "data_offset": 2048, 00:16:19.009 "data_size": 63488 
00:16:19.009 }, 00:16:19.009 { 00:16:19.009 "name": "pt4", 00:16:19.009 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.009 "is_configured": true, 00:16:19.009 "data_offset": 2048, 00:16:19.009 "data_size": 63488 00:16:19.009 } 00:16:19.009 ] 00:16:19.009 } 00:16:19.009 } 00:16:19.009 }' 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:19.009 pt2 00:16:19.009 pt3 00:16:19.009 pt4' 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.009 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.267 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.268 [2024-11-27 04:38:06.730826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6570624f-c69e-4d20-9723-2d85acdb5046 '!=' 6570624f-c69e-4d20-9723-2d85acdb5046 ']' 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72870 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72870 ']' 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72870 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72870 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72870' 00:16:19.268 killing process with pid 72870 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72870 00:16:19.268 [2024-11-27 04:38:06.819553] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.268 04:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72870 00:16:19.268 [2024-11-27 04:38:06.819825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.268 [2024-11-27 04:38:06.820029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.268 [2024-11-27 04:38:06.820163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:19.834 [2024-11-27 04:38:07.170908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.767 04:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:20.767 00:16:20.767 real 0m5.920s 00:16:20.767 user 0m8.935s 00:16:20.767 sys 0m0.842s 00:16:20.767 04:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.767 04:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.767 ************************************ 00:16:20.767 END TEST raid_superblock_test 
00:16:20.768 ************************************ 00:16:20.768 04:38:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:16:20.768 04:38:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:20.768 04:38:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.768 04:38:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.768 ************************************ 00:16:20.768 START TEST raid_read_error_test 00:16:20.768 ************************************ 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Jf4avQkRTX 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73140 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73140 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 73140 ']' 00:16:20.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.768 04:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.027 [2024-11-27 04:38:08.391007] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:16:21.027 [2024-11-27 04:38:08.391181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73140 ] 00:16:21.027 [2024-11-27 04:38:08.575448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.286 [2024-11-27 04:38:08.706941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.545 [2024-11-27 04:38:08.910162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.545 [2024-11-27 04:38:08.911259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.806 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.806 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:21.806 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:21.806 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:21.806 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.806 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.806 BaseBdev1_malloc 00:16:21.806 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.806 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:21.806 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.806 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 true 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 [2024-11-27 04:38:09.431390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:22.066 [2024-11-27 04:38:09.431624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.066 [2024-11-27 04:38:09.431676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:22.066 [2024-11-27 04:38:09.431696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.066 [2024-11-27 04:38:09.434575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.066 BaseBdev1 00:16:22.066 [2024-11-27 04:38:09.434802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 BaseBdev2_malloc 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 true 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 [2024-11-27 04:38:09.494280] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:22.066 [2024-11-27 04:38:09.494543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.066 [2024-11-27 04:38:09.494613] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:22.066 [2024-11-27 04:38:09.494637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.066 [2024-11-27 04:38:09.497589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.066 [2024-11-27 04:38:09.497796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:22.066 BaseBdev2 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 BaseBdev3_malloc 00:16:22.066 04:38:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 true 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 [2024-11-27 04:38:09.567396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:22.066 [2024-11-27 04:38:09.567456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.066 [2024-11-27 04:38:09.567485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:22.066 [2024-11-27 04:38:09.567503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.066 [2024-11-27 04:38:09.570409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.066 [2024-11-27 04:38:09.570611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:22.066 BaseBdev3 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 BaseBdev4_malloc 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 true 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.066 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.066 [2024-11-27 04:38:09.627361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:22.066 [2024-11-27 04:38:09.627428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.066 [2024-11-27 04:38:09.627455] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:22.066 [2024-11-27 04:38:09.627472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.066 [2024-11-27 04:38:09.630393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.067 [2024-11-27 04:38:09.630446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:22.067 BaseBdev4 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.067 [2024-11-27 04:38:09.635448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.067 [2024-11-27 04:38:09.638121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.067 [2024-11-27 04:38:09.638228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.067 [2024-11-27 04:38:09.638328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:22.067 [2024-11-27 04:38:09.638638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:22.067 [2024-11-27 04:38:09.638677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:22.067 [2024-11-27 04:38:09.639039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:22.067 [2024-11-27 04:38:09.639255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:22.067 [2024-11-27 04:38:09.639273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:22.067 [2024-11-27 04:38:09.639513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:22.067 04:38:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.067 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.331 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.331 "name": "raid_bdev1", 00:16:22.331 "uuid": "8486cd35-0490-4ec5-b44f-cd9d0d2dba6c", 00:16:22.331 "strip_size_kb": 64, 00:16:22.331 "state": "online", 00:16:22.331 "raid_level": "concat", 00:16:22.331 "superblock": true, 00:16:22.331 "num_base_bdevs": 4, 00:16:22.331 "num_base_bdevs_discovered": 4, 00:16:22.331 "num_base_bdevs_operational": 4, 00:16:22.331 "base_bdevs_list": [ 
00:16:22.331 { 00:16:22.331 "name": "BaseBdev1", 00:16:22.331 "uuid": "cab81b73-8a9d-54f4-ac5f-d48e338353a6", 00:16:22.331 "is_configured": true, 00:16:22.331 "data_offset": 2048, 00:16:22.331 "data_size": 63488 00:16:22.331 }, 00:16:22.331 { 00:16:22.331 "name": "BaseBdev2", 00:16:22.331 "uuid": "987303c6-ea26-5373-b922-7c7ebf3749a4", 00:16:22.331 "is_configured": true, 00:16:22.331 "data_offset": 2048, 00:16:22.331 "data_size": 63488 00:16:22.331 }, 00:16:22.331 { 00:16:22.331 "name": "BaseBdev3", 00:16:22.331 "uuid": "c02712a9-9288-56ad-9150-b94428cd5dcd", 00:16:22.331 "is_configured": true, 00:16:22.331 "data_offset": 2048, 00:16:22.331 "data_size": 63488 00:16:22.331 }, 00:16:22.331 { 00:16:22.331 "name": "BaseBdev4", 00:16:22.331 "uuid": "cd083b1f-d316-5f7b-95f0-1d049ec0c21f", 00:16:22.331 "is_configured": true, 00:16:22.331 "data_offset": 2048, 00:16:22.331 "data_size": 63488 00:16:22.331 } 00:16:22.331 ] 00:16:22.331 }' 00:16:22.331 04:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.331 04:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.590 04:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:22.590 04:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:22.848 [2024-11-27 04:38:10.269072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:23.780 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:23.780 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.780 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.780 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.780 04:38:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:23.780 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:23.780 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.781 04:38:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.781 "name": "raid_bdev1", 00:16:23.781 "uuid": "8486cd35-0490-4ec5-b44f-cd9d0d2dba6c", 00:16:23.781 "strip_size_kb": 64, 00:16:23.781 "state": "online", 00:16:23.781 "raid_level": "concat", 00:16:23.781 "superblock": true, 00:16:23.781 "num_base_bdevs": 4, 00:16:23.781 "num_base_bdevs_discovered": 4, 00:16:23.781 "num_base_bdevs_operational": 4, 00:16:23.781 "base_bdevs_list": [ 00:16:23.781 { 00:16:23.781 "name": "BaseBdev1", 00:16:23.781 "uuid": "cab81b73-8a9d-54f4-ac5f-d48e338353a6", 00:16:23.781 "is_configured": true, 00:16:23.781 "data_offset": 2048, 00:16:23.781 "data_size": 63488 00:16:23.781 }, 00:16:23.781 { 00:16:23.781 "name": "BaseBdev2", 00:16:23.781 "uuid": "987303c6-ea26-5373-b922-7c7ebf3749a4", 00:16:23.781 "is_configured": true, 00:16:23.781 "data_offset": 2048, 00:16:23.781 "data_size": 63488 00:16:23.781 }, 00:16:23.781 { 00:16:23.781 "name": "BaseBdev3", 00:16:23.781 "uuid": "c02712a9-9288-56ad-9150-b94428cd5dcd", 00:16:23.781 "is_configured": true, 00:16:23.781 "data_offset": 2048, 00:16:23.781 "data_size": 63488 00:16:23.781 }, 00:16:23.781 { 00:16:23.781 "name": "BaseBdev4", 00:16:23.781 "uuid": "cd083b1f-d316-5f7b-95f0-1d049ec0c21f", 00:16:23.781 "is_configured": true, 00:16:23.781 "data_offset": 2048, 00:16:23.781 "data_size": 63488 00:16:23.781 } 00:16:23.781 ] 00:16:23.781 }' 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.781 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.346 [2024-11-27 04:38:11.680160] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.346 [2024-11-27 04:38:11.680200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.346 [2024-11-27 04:38:11.683584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.346 { 00:16:24.346 "results": [ 00:16:24.346 { 00:16:24.346 "job": "raid_bdev1", 00:16:24.346 "core_mask": "0x1", 00:16:24.346 "workload": "randrw", 00:16:24.346 "percentage": 50, 00:16:24.346 "status": "finished", 00:16:24.346 "queue_depth": 1, 00:16:24.346 "io_size": 131072, 00:16:24.346 "runtime": 1.408679, 00:16:24.346 "iops": 10453.055664207388, 00:16:24.346 "mibps": 1306.6319580259235, 00:16:24.346 "io_failed": 1, 00:16:24.346 "io_timeout": 0, 00:16:24.346 "avg_latency_us": 133.21074561999185, 00:16:24.346 "min_latency_us": 41.192727272727275, 00:16:24.346 "max_latency_us": 1832.0290909090909 00:16:24.346 } 00:16:24.346 ], 00:16:24.346 "core_count": 1 00:16:24.346 } 00:16:24.346 [2024-11-27 04:38:11.684152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.346 [2024-11-27 04:38:11.684230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.346 [2024-11-27 04:38:11.684251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73140 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73140 ']' 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73140 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:16:24.346 04:38:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73140 00:16:24.346 killing process with pid 73140 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73140' 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73140 00:16:24.346 [2024-11-27 04:38:11.718642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.346 04:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73140 00:16:24.603 [2024-11-27 04:38:12.012417] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.534 04:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Jf4avQkRTX 00:16:25.534 04:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:25.534 04:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:25.534 04:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:16:25.534 04:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:25.534 04:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:25.534 04:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:25.534 04:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:16:25.534 ************************************ 00:16:25.534 END TEST raid_read_error_test 00:16:25.534 ************************************ 
00:16:25.534 00:16:25.534 real 0m4.861s 00:16:25.534 user 0m5.986s 00:16:25.534 sys 0m0.599s 00:16:25.534 04:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.534 04:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.850 04:38:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:16:25.850 04:38:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:25.850 04:38:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.850 04:38:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.850 ************************************ 00:16:25.850 START TEST raid_write_error_test 00:16:25.850 ************************************ 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:25.850 04:38:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.3Sbs9Q2HmR 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73286 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73286 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73286 ']' 00:16:25.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.850 04:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.850 [2024-11-27 04:38:13.285683] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:16:25.850 [2024-11-27 04:38:13.285854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:16:26.108 [2024-11-27 04:38:13.461170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.108 [2024-11-27 04:38:13.596668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.366 [2024-11-27 04:38:13.800659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.366 [2024-11-27 04:38:13.800722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.624 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.624 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:26.624 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:26.624 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:26.624 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.624 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.624 BaseBdev1_malloc 00:16:26.624 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.882 true 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.882 [2024-11-27 04:38:14.258905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:26.882 [2024-11-27 04:38:14.259130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.882 [2024-11-27 04:38:14.259190] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:26.882 [2024-11-27 04:38:14.259226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.882 [2024-11-27 04:38:14.262132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.882 [2024-11-27 04:38:14.262184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:26.882 BaseBdev1 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.882 BaseBdev2_malloc 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:26.882 04:38:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.882 true 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.882 [2024-11-27 04:38:14.319035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:26.882 [2024-11-27 04:38:14.319241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.882 [2024-11-27 04:38:14.319441] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:26.882 [2024-11-27 04:38:14.319597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.882 [2024-11-27 04:38:14.322518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.882 [2024-11-27 04:38:14.322569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:26.882 BaseBdev2 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:26.882 BaseBdev3_malloc 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.882 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 true 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 [2024-11-27 04:38:14.389854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:26.883 [2024-11-27 04:38:14.390087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.883 [2024-11-27 04:38:14.390254] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:26.883 [2024-11-27 04:38:14.390304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.883 [2024-11-27 04:38:14.393262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.883 BaseBdev3 00:16:26.883 [2024-11-27 04:38:14.393433] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 BaseBdev4_malloc 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 true 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 [2024-11-27 04:38:14.450037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:26.883 [2024-11-27 04:38:14.450246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.883 [2024-11-27 04:38:14.450411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:26.883 [2024-11-27 04:38:14.450463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.883 [2024-11-27 04:38:14.453395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.883 [2024-11-27 04:38:14.453569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:26.883 BaseBdev4 
00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 [2024-11-27 04:38:14.458151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.883 [2024-11-27 04:38:14.460725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.883 [2024-11-27 04:38:14.461027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:26.883 [2024-11-27 04:38:14.461310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:26.883 [2024-11-27 04:38:14.461620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:26.883 [2024-11-27 04:38:14.461660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:26.883 [2024-11-27 04:38:14.462024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:26.883 [2024-11-27 04:38:14.462249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:26.883 [2024-11-27 04:38:14.462271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:26.883 [2024-11-27 04:38:14.462513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.141 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.141 "name": "raid_bdev1", 00:16:27.141 "uuid": "cccfadc0-4447-4fc4-ad08-6d2c4502f27e", 00:16:27.141 "strip_size_kb": 64, 00:16:27.141 "state": "online", 00:16:27.141 "raid_level": "concat", 00:16:27.141 "superblock": true, 00:16:27.141 "num_base_bdevs": 4, 00:16:27.141 "num_base_bdevs_discovered": 4, 00:16:27.141 
"num_base_bdevs_operational": 4, 00:16:27.141 "base_bdevs_list": [ 00:16:27.141 { 00:16:27.141 "name": "BaseBdev1", 00:16:27.141 "uuid": "a6dedb1d-4c5a-592f-8494-85b969f42fbf", 00:16:27.141 "is_configured": true, 00:16:27.141 "data_offset": 2048, 00:16:27.141 "data_size": 63488 00:16:27.141 }, 00:16:27.141 { 00:16:27.141 "name": "BaseBdev2", 00:16:27.141 "uuid": "c65f15ee-e90a-547b-95b2-4834a2d70af3", 00:16:27.141 "is_configured": true, 00:16:27.141 "data_offset": 2048, 00:16:27.141 "data_size": 63488 00:16:27.141 }, 00:16:27.141 { 00:16:27.141 "name": "BaseBdev3", 00:16:27.141 "uuid": "686606f8-3ac2-51f8-a75f-61e3d2c4be16", 00:16:27.141 "is_configured": true, 00:16:27.141 "data_offset": 2048, 00:16:27.141 "data_size": 63488 00:16:27.141 }, 00:16:27.141 { 00:16:27.141 "name": "BaseBdev4", 00:16:27.141 "uuid": "734af138-19f8-5876-9339-051f3cd0c55d", 00:16:27.141 "is_configured": true, 00:16:27.141 "data_offset": 2048, 00:16:27.141 "data_size": 63488 00:16:27.141 } 00:16:27.141 ] 00:16:27.141 }' 00:16:27.141 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.141 04:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.399 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:27.399 04:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:27.657 [2024-11-27 04:38:15.064030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.592 04:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.592 04:38:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.592 04:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.592 "name": "raid_bdev1", 00:16:28.592 "uuid": "cccfadc0-4447-4fc4-ad08-6d2c4502f27e", 00:16:28.592 "strip_size_kb": 64, 00:16:28.592 "state": "online", 00:16:28.592 "raid_level": "concat", 00:16:28.592 "superblock": true, 00:16:28.592 "num_base_bdevs": 4, 00:16:28.592 "num_base_bdevs_discovered": 4, 00:16:28.592 "num_base_bdevs_operational": 4, 00:16:28.592 "base_bdevs_list": [ 00:16:28.592 { 00:16:28.592 "name": "BaseBdev1", 00:16:28.592 "uuid": "a6dedb1d-4c5a-592f-8494-85b969f42fbf", 00:16:28.592 "is_configured": true, 00:16:28.592 "data_offset": 2048, 00:16:28.592 "data_size": 63488 00:16:28.592 }, 00:16:28.592 { 00:16:28.592 "name": "BaseBdev2", 00:16:28.592 "uuid": "c65f15ee-e90a-547b-95b2-4834a2d70af3", 00:16:28.592 "is_configured": true, 00:16:28.592 "data_offset": 2048, 00:16:28.592 "data_size": 63488 00:16:28.592 }, 00:16:28.592 { 00:16:28.592 "name": "BaseBdev3", 00:16:28.592 "uuid": "686606f8-3ac2-51f8-a75f-61e3d2c4be16", 00:16:28.592 "is_configured": true, 00:16:28.592 "data_offset": 2048, 00:16:28.592 "data_size": 63488 00:16:28.592 }, 00:16:28.592 { 00:16:28.592 "name": "BaseBdev4", 00:16:28.592 "uuid": "734af138-19f8-5876-9339-051f3cd0c55d", 00:16:28.592 "is_configured": true, 00:16:28.592 "data_offset": 2048, 00:16:28.592 "data_size": 63488 00:16:28.592 } 00:16:28.592 ] 00:16:28.592 }' 00:16:28.592 04:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.592 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.851 04:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:28.851 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.851 04:38:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.851 [2024-11-27 04:38:16.450375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:28.851 [2024-11-27 04:38:16.450426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.851 [2024-11-27 04:38:16.453827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.851 [2024-11-27 04:38:16.453914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.851 [2024-11-27 04:38:16.453986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.851 [2024-11-27 04:38:16.454004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:28.851 { 00:16:28.851 "results": [ 00:16:28.851 { 00:16:28.851 "job": "raid_bdev1", 00:16:28.851 "core_mask": "0x1", 00:16:28.851 "workload": "randrw", 00:16:28.851 "percentage": 50, 00:16:28.851 "status": "finished", 00:16:28.851 "queue_depth": 1, 00:16:28.851 "io_size": 131072, 00:16:28.851 "runtime": 1.383808, 00:16:28.851 "iops": 10489.894551845342, 00:16:28.851 "mibps": 1311.2368189806677, 00:16:28.851 "io_failed": 1, 00:16:28.851 "io_timeout": 0, 00:16:28.851 "avg_latency_us": 132.8315477152179, 00:16:28.851 "min_latency_us": 41.89090909090909, 00:16:28.851 "max_latency_us": 1817.1345454545456 00:16:28.851 } 00:16:28.851 ], 00:16:28.851 "core_count": 1 00:16:28.851 } 00:16:28.851 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.851 04:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73286 00:16:28.851 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73286 ']' 00:16:28.851 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73286 00:16:28.851 04:38:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:16:28.851 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.851 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73286 00:16:29.109 killing process with pid 73286 00:16:29.109 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.109 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.109 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73286' 00:16:29.109 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73286 00:16:29.109 [2024-11-27 04:38:16.486754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:29.109 04:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73286 00:16:29.367 [2024-11-27 04:38:16.779485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:30.302 04:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:30.302 04:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3Sbs9Q2HmR 00:16:30.302 04:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:30.302 04:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:16:30.302 04:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:30.302 04:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:30.302 04:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:30.302 04:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:16:30.302 00:16:30.302 real 0m4.695s 00:16:30.302 user 0m5.702s 
00:16:30.302 sys 0m0.577s 00:16:30.302 04:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.302 ************************************ 00:16:30.302 END TEST raid_write_error_test 00:16:30.302 ************************************ 00:16:30.302 04:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.302 04:38:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:30.302 04:38:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:16:30.302 04:38:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:30.302 04:38:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.302 04:38:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:30.560 ************************************ 00:16:30.560 START TEST raid_state_function_test 00:16:30.560 ************************************ 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:30.560 
04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:30.560 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:30.561 04:38:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:30.561 Process raid pid: 73434 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73434 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73434' 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73434 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73434 ']' 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.561 04:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.561 [2024-11-27 04:38:18.028186] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:16:30.561 [2024-11-27 04:38:18.028400] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.818 [2024-11-27 04:38:18.200662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.818 [2024-11-27 04:38:18.332136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.077 [2024-11-27 04:38:18.538888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.077 [2024-11-27 04:38:18.539123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.644 [2024-11-27 04:38:19.077593] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.644 [2024-11-27 04:38:19.077800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.644 [2024-11-27 04:38:19.077830] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.644 [2024-11-27 04:38:19.077849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.644 [2024-11-27 04:38:19.077860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:31.644 [2024-11-27 04:38:19.077884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.644 [2024-11-27 04:38:19.077896] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:31.644 [2024-11-27 04:38:19.077911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.644 "name": "Existed_Raid", 00:16:31.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.644 "strip_size_kb": 0, 00:16:31.644 "state": "configuring", 00:16:31.644 "raid_level": "raid1", 00:16:31.644 "superblock": false, 00:16:31.644 "num_base_bdevs": 4, 00:16:31.644 "num_base_bdevs_discovered": 0, 00:16:31.644 "num_base_bdevs_operational": 4, 00:16:31.644 "base_bdevs_list": [ 00:16:31.644 { 00:16:31.644 "name": "BaseBdev1", 00:16:31.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.644 "is_configured": false, 00:16:31.644 "data_offset": 0, 00:16:31.644 "data_size": 0 00:16:31.644 }, 00:16:31.644 { 00:16:31.644 "name": "BaseBdev2", 00:16:31.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.644 "is_configured": false, 00:16:31.644 "data_offset": 0, 00:16:31.644 "data_size": 0 00:16:31.644 }, 00:16:31.644 { 00:16:31.644 "name": "BaseBdev3", 00:16:31.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.644 "is_configured": false, 00:16:31.644 "data_offset": 0, 00:16:31.644 "data_size": 0 00:16:31.644 }, 00:16:31.644 { 00:16:31.644 "name": "BaseBdev4", 00:16:31.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.644 "is_configured": false, 00:16:31.644 "data_offset": 0, 00:16:31.644 "data_size": 0 00:16:31.644 } 00:16:31.644 ] 00:16:31.644 }' 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.644 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.211 [2024-11-27 04:38:19.577686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.211 [2024-11-27 04:38:19.577734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.211 [2024-11-27 04:38:19.585663] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.211 [2024-11-27 04:38:19.585852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.211 [2024-11-27 04:38:19.585901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.211 [2024-11-27 04:38:19.585920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.211 [2024-11-27 04:38:19.585930] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:32.211 [2024-11-27 04:38:19.585944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.211 [2024-11-27 04:38:19.585954] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:32.211 [2024-11-27 04:38:19.585968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.211 [2024-11-27 04:38:19.630621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.211 BaseBdev1 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.211 [ 00:16:32.211 { 00:16:32.211 "name": "BaseBdev1", 00:16:32.211 "aliases": [ 00:16:32.211 "0d7c23a3-b559-4f6b-8db2-5f97b6f38bdb" 00:16:32.211 ], 00:16:32.211 "product_name": "Malloc disk", 00:16:32.211 "block_size": 512, 00:16:32.211 "num_blocks": 65536, 00:16:32.211 "uuid": "0d7c23a3-b559-4f6b-8db2-5f97b6f38bdb", 00:16:32.211 "assigned_rate_limits": { 00:16:32.211 "rw_ios_per_sec": 0, 00:16:32.211 "rw_mbytes_per_sec": 0, 00:16:32.211 "r_mbytes_per_sec": 0, 00:16:32.211 "w_mbytes_per_sec": 0 00:16:32.211 }, 00:16:32.211 "claimed": true, 00:16:32.211 "claim_type": "exclusive_write", 00:16:32.211 "zoned": false, 00:16:32.211 "supported_io_types": { 00:16:32.211 "read": true, 00:16:32.211 "write": true, 00:16:32.211 "unmap": true, 00:16:32.211 "flush": true, 00:16:32.211 "reset": true, 00:16:32.211 "nvme_admin": false, 00:16:32.211 "nvme_io": false, 00:16:32.211 "nvme_io_md": false, 00:16:32.211 "write_zeroes": true, 00:16:32.211 "zcopy": true, 00:16:32.211 "get_zone_info": false, 00:16:32.211 "zone_management": false, 00:16:32.211 "zone_append": false, 00:16:32.211 "compare": false, 00:16:32.211 "compare_and_write": false, 00:16:32.211 "abort": true, 00:16:32.211 "seek_hole": false, 00:16:32.211 "seek_data": false, 00:16:32.211 "copy": true, 00:16:32.211 "nvme_iov_md": false 00:16:32.211 }, 00:16:32.211 "memory_domains": [ 00:16:32.211 { 00:16:32.211 "dma_device_id": "system", 00:16:32.211 "dma_device_type": 1 00:16:32.211 }, 00:16:32.211 { 00:16:32.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.211 "dma_device_type": 2 00:16:32.211 } 00:16:32.211 ], 00:16:32.211 "driver_specific": {} 00:16:32.211 } 00:16:32.211 ] 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.211 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.211 "name": "Existed_Raid", 00:16:32.211 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:32.211 "strip_size_kb": 0, 00:16:32.211 "state": "configuring", 00:16:32.211 "raid_level": "raid1", 00:16:32.211 "superblock": false, 00:16:32.211 "num_base_bdevs": 4, 00:16:32.211 "num_base_bdevs_discovered": 1, 00:16:32.211 "num_base_bdevs_operational": 4, 00:16:32.212 "base_bdevs_list": [ 00:16:32.212 { 00:16:32.212 "name": "BaseBdev1", 00:16:32.212 "uuid": "0d7c23a3-b559-4f6b-8db2-5f97b6f38bdb", 00:16:32.212 "is_configured": true, 00:16:32.212 "data_offset": 0, 00:16:32.212 "data_size": 65536 00:16:32.212 }, 00:16:32.212 { 00:16:32.212 "name": "BaseBdev2", 00:16:32.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.212 "is_configured": false, 00:16:32.212 "data_offset": 0, 00:16:32.212 "data_size": 0 00:16:32.212 }, 00:16:32.212 { 00:16:32.212 "name": "BaseBdev3", 00:16:32.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.212 "is_configured": false, 00:16:32.212 "data_offset": 0, 00:16:32.212 "data_size": 0 00:16:32.212 }, 00:16:32.212 { 00:16:32.212 "name": "BaseBdev4", 00:16:32.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.212 "is_configured": false, 00:16:32.212 "data_offset": 0, 00:16:32.212 "data_size": 0 00:16:32.212 } 00:16:32.212 ] 00:16:32.212 }' 00:16:32.212 04:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.212 04:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.779 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:32.779 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.779 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.779 [2024-11-27 04:38:20.166862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.779 [2024-11-27 04:38:20.166926] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:32.779 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.779 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:32.779 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.779 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.779 [2024-11-27 04:38:20.178894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.780 [2024-11-27 04:38:20.181370] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.780 [2024-11-27 04:38:20.181425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.780 [2024-11-27 04:38:20.181441] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:32.780 [2024-11-27 04:38:20.181467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.780 [2024-11-27 04:38:20.181477] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:32.780 [2024-11-27 04:38:20.181491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:32.780 04:38:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.780 "name": "Existed_Raid", 00:16:32.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.780 "strip_size_kb": 0, 00:16:32.780 "state": "configuring", 00:16:32.780 "raid_level": "raid1", 00:16:32.780 "superblock": false, 00:16:32.780 "num_base_bdevs": 4, 00:16:32.780 "num_base_bdevs_discovered": 1, 00:16:32.780 
"num_base_bdevs_operational": 4, 00:16:32.780 "base_bdevs_list": [ 00:16:32.780 { 00:16:32.780 "name": "BaseBdev1", 00:16:32.780 "uuid": "0d7c23a3-b559-4f6b-8db2-5f97b6f38bdb", 00:16:32.780 "is_configured": true, 00:16:32.780 "data_offset": 0, 00:16:32.780 "data_size": 65536 00:16:32.780 }, 00:16:32.780 { 00:16:32.780 "name": "BaseBdev2", 00:16:32.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.780 "is_configured": false, 00:16:32.780 "data_offset": 0, 00:16:32.780 "data_size": 0 00:16:32.780 }, 00:16:32.780 { 00:16:32.780 "name": "BaseBdev3", 00:16:32.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.780 "is_configured": false, 00:16:32.780 "data_offset": 0, 00:16:32.780 "data_size": 0 00:16:32.780 }, 00:16:32.780 { 00:16:32.780 "name": "BaseBdev4", 00:16:32.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.780 "is_configured": false, 00:16:32.780 "data_offset": 0, 00:16:32.780 "data_size": 0 00:16:32.780 } 00:16:32.780 ] 00:16:32.780 }' 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.780 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.347 [2024-11-27 04:38:20.733540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:33.347 BaseBdev2 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.347 [ 00:16:33.347 { 00:16:33.347 "name": "BaseBdev2", 00:16:33.347 "aliases": [ 00:16:33.347 "787c04c7-18b0-4e89-91ec-9c94e6da1a93" 00:16:33.347 ], 00:16:33.347 "product_name": "Malloc disk", 00:16:33.347 "block_size": 512, 00:16:33.347 "num_blocks": 65536, 00:16:33.347 "uuid": "787c04c7-18b0-4e89-91ec-9c94e6da1a93", 00:16:33.347 "assigned_rate_limits": { 00:16:33.347 "rw_ios_per_sec": 0, 00:16:33.347 "rw_mbytes_per_sec": 0, 00:16:33.347 "r_mbytes_per_sec": 0, 00:16:33.347 "w_mbytes_per_sec": 0 00:16:33.347 }, 00:16:33.347 "claimed": true, 00:16:33.347 "claim_type": "exclusive_write", 00:16:33.347 "zoned": false, 00:16:33.347 "supported_io_types": { 00:16:33.347 "read": true, 00:16:33.347 "write": true, 00:16:33.347 
"unmap": true, 00:16:33.347 "flush": true, 00:16:33.347 "reset": true, 00:16:33.347 "nvme_admin": false, 00:16:33.347 "nvme_io": false, 00:16:33.347 "nvme_io_md": false, 00:16:33.347 "write_zeroes": true, 00:16:33.347 "zcopy": true, 00:16:33.347 "get_zone_info": false, 00:16:33.347 "zone_management": false, 00:16:33.347 "zone_append": false, 00:16:33.347 "compare": false, 00:16:33.347 "compare_and_write": false, 00:16:33.347 "abort": true, 00:16:33.347 "seek_hole": false, 00:16:33.347 "seek_data": false, 00:16:33.347 "copy": true, 00:16:33.347 "nvme_iov_md": false 00:16:33.347 }, 00:16:33.347 "memory_domains": [ 00:16:33.347 { 00:16:33.347 "dma_device_id": "system", 00:16:33.347 "dma_device_type": 1 00:16:33.347 }, 00:16:33.347 { 00:16:33.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.347 "dma_device_type": 2 00:16:33.347 } 00:16:33.347 ], 00:16:33.347 "driver_specific": {} 00:16:33.347 } 00:16:33.347 ] 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.347 04:38:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.347 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.347 "name": "Existed_Raid", 00:16:33.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.347 "strip_size_kb": 0, 00:16:33.347 "state": "configuring", 00:16:33.347 "raid_level": "raid1", 00:16:33.347 "superblock": false, 00:16:33.347 "num_base_bdevs": 4, 00:16:33.347 "num_base_bdevs_discovered": 2, 00:16:33.347 "num_base_bdevs_operational": 4, 00:16:33.347 "base_bdevs_list": [ 00:16:33.347 { 00:16:33.347 "name": "BaseBdev1", 00:16:33.347 "uuid": "0d7c23a3-b559-4f6b-8db2-5f97b6f38bdb", 00:16:33.347 "is_configured": true, 00:16:33.347 "data_offset": 0, 00:16:33.347 "data_size": 65536 00:16:33.347 }, 00:16:33.347 { 00:16:33.347 "name": "BaseBdev2", 00:16:33.347 "uuid": "787c04c7-18b0-4e89-91ec-9c94e6da1a93", 00:16:33.347 "is_configured": true, 00:16:33.347 
"data_offset": 0, 00:16:33.347 "data_size": 65536 00:16:33.347 }, 00:16:33.347 { 00:16:33.347 "name": "BaseBdev3", 00:16:33.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.347 "is_configured": false, 00:16:33.347 "data_offset": 0, 00:16:33.347 "data_size": 0 00:16:33.347 }, 00:16:33.347 { 00:16:33.347 "name": "BaseBdev4", 00:16:33.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.348 "is_configured": false, 00:16:33.348 "data_offset": 0, 00:16:33.348 "data_size": 0 00:16:33.348 } 00:16:33.348 ] 00:16:33.348 }' 00:16:33.348 04:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.348 04:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.915 [2024-11-27 04:38:21.304443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.915 BaseBdev3 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.915 [ 00:16:33.915 { 00:16:33.915 "name": "BaseBdev3", 00:16:33.915 "aliases": [ 00:16:33.915 "22688807-d0f3-4573-853a-ab402cac4314" 00:16:33.915 ], 00:16:33.915 "product_name": "Malloc disk", 00:16:33.915 "block_size": 512, 00:16:33.915 "num_blocks": 65536, 00:16:33.915 "uuid": "22688807-d0f3-4573-853a-ab402cac4314", 00:16:33.915 "assigned_rate_limits": { 00:16:33.915 "rw_ios_per_sec": 0, 00:16:33.915 "rw_mbytes_per_sec": 0, 00:16:33.915 "r_mbytes_per_sec": 0, 00:16:33.915 "w_mbytes_per_sec": 0 00:16:33.915 }, 00:16:33.915 "claimed": true, 00:16:33.915 "claim_type": "exclusive_write", 00:16:33.915 "zoned": false, 00:16:33.915 "supported_io_types": { 00:16:33.915 "read": true, 00:16:33.915 "write": true, 00:16:33.915 "unmap": true, 00:16:33.915 "flush": true, 00:16:33.915 "reset": true, 00:16:33.915 "nvme_admin": false, 00:16:33.915 "nvme_io": false, 00:16:33.915 "nvme_io_md": false, 00:16:33.915 "write_zeroes": true, 00:16:33.915 "zcopy": true, 00:16:33.915 "get_zone_info": false, 00:16:33.915 "zone_management": false, 00:16:33.915 "zone_append": false, 00:16:33.915 "compare": false, 00:16:33.915 "compare_and_write": false, 00:16:33.915 "abort": true, 
00:16:33.915 "seek_hole": false, 00:16:33.915 "seek_data": false, 00:16:33.915 "copy": true, 00:16:33.915 "nvme_iov_md": false 00:16:33.915 }, 00:16:33.915 "memory_domains": [ 00:16:33.915 { 00:16:33.915 "dma_device_id": "system", 00:16:33.915 "dma_device_type": 1 00:16:33.915 }, 00:16:33.915 { 00:16:33.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.915 "dma_device_type": 2 00:16:33.915 } 00:16:33.915 ], 00:16:33.915 "driver_specific": {} 00:16:33.915 } 00:16:33.915 ] 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.915 04:38:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.915 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.915 "name": "Existed_Raid", 00:16:33.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.915 "strip_size_kb": 0, 00:16:33.915 "state": "configuring", 00:16:33.915 "raid_level": "raid1", 00:16:33.915 "superblock": false, 00:16:33.915 "num_base_bdevs": 4, 00:16:33.915 "num_base_bdevs_discovered": 3, 00:16:33.915 "num_base_bdevs_operational": 4, 00:16:33.915 "base_bdevs_list": [ 00:16:33.915 { 00:16:33.915 "name": "BaseBdev1", 00:16:33.915 "uuid": "0d7c23a3-b559-4f6b-8db2-5f97b6f38bdb", 00:16:33.915 "is_configured": true, 00:16:33.915 "data_offset": 0, 00:16:33.915 "data_size": 65536 00:16:33.915 }, 00:16:33.915 { 00:16:33.915 "name": "BaseBdev2", 00:16:33.916 "uuid": "787c04c7-18b0-4e89-91ec-9c94e6da1a93", 00:16:33.916 "is_configured": true, 00:16:33.916 "data_offset": 0, 00:16:33.916 "data_size": 65536 00:16:33.916 }, 00:16:33.916 { 00:16:33.916 "name": "BaseBdev3", 00:16:33.916 "uuid": "22688807-d0f3-4573-853a-ab402cac4314", 00:16:33.916 "is_configured": true, 00:16:33.916 "data_offset": 0, 00:16:33.916 "data_size": 65536 00:16:33.916 }, 00:16:33.916 { 00:16:33.916 "name": "BaseBdev4", 00:16:33.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.916 "is_configured": false, 00:16:33.916 "data_offset": 
0, 00:16:33.916 "data_size": 0 00:16:33.916 } 00:16:33.916 ] 00:16:33.916 }' 00:16:33.916 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.916 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.482 [2024-11-27 04:38:21.888004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:34.482 [2024-11-27 04:38:21.888276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:34.482 [2024-11-27 04:38:21.888331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:34.482 [2024-11-27 04:38:21.888806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:34.482 [2024-11-27 04:38:21.889174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:34.482 [2024-11-27 04:38:21.889313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:34.482 [2024-11-27 04:38:21.889763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.482 BaseBdev4 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.482 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.482 [ 00:16:34.482 { 00:16:34.482 "name": "BaseBdev4", 00:16:34.482 "aliases": [ 00:16:34.482 "faa8193d-7fa9-4612-b537-0c8dc35033ea" 00:16:34.482 ], 00:16:34.482 "product_name": "Malloc disk", 00:16:34.482 "block_size": 512, 00:16:34.482 "num_blocks": 65536, 00:16:34.482 "uuid": "faa8193d-7fa9-4612-b537-0c8dc35033ea", 00:16:34.482 "assigned_rate_limits": { 00:16:34.482 "rw_ios_per_sec": 0, 00:16:34.482 "rw_mbytes_per_sec": 0, 00:16:34.482 "r_mbytes_per_sec": 0, 00:16:34.482 "w_mbytes_per_sec": 0 00:16:34.482 }, 00:16:34.482 "claimed": true, 00:16:34.482 "claim_type": "exclusive_write", 00:16:34.482 "zoned": false, 00:16:34.482 "supported_io_types": { 00:16:34.482 "read": true, 00:16:34.482 "write": true, 00:16:34.482 "unmap": true, 00:16:34.482 "flush": true, 00:16:34.482 "reset": true, 00:16:34.482 "nvme_admin": false, 00:16:34.482 "nvme_io": 
false, 00:16:34.482 "nvme_io_md": false, 00:16:34.482 "write_zeroes": true, 00:16:34.482 "zcopy": true, 00:16:34.482 "get_zone_info": false, 00:16:34.482 "zone_management": false, 00:16:34.482 "zone_append": false, 00:16:34.482 "compare": false, 00:16:34.482 "compare_and_write": false, 00:16:34.482 "abort": true, 00:16:34.482 "seek_hole": false, 00:16:34.482 "seek_data": false, 00:16:34.482 "copy": true, 00:16:34.483 "nvme_iov_md": false 00:16:34.483 }, 00:16:34.483 "memory_domains": [ 00:16:34.483 { 00:16:34.483 "dma_device_id": "system", 00:16:34.483 "dma_device_type": 1 00:16:34.483 }, 00:16:34.483 { 00:16:34.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.483 "dma_device_type": 2 00:16:34.483 } 00:16:34.483 ], 00:16:34.483 "driver_specific": {} 00:16:34.483 } 00:16:34.483 ] 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.483 "name": "Existed_Raid", 00:16:34.483 "uuid": "4ee92b64-aa15-40db-b2da-f9dfbbab1f40", 00:16:34.483 "strip_size_kb": 0, 00:16:34.483 "state": "online", 00:16:34.483 "raid_level": "raid1", 00:16:34.483 "superblock": false, 00:16:34.483 "num_base_bdevs": 4, 00:16:34.483 "num_base_bdevs_discovered": 4, 00:16:34.483 "num_base_bdevs_operational": 4, 00:16:34.483 "base_bdevs_list": [ 00:16:34.483 { 00:16:34.483 "name": "BaseBdev1", 00:16:34.483 "uuid": "0d7c23a3-b559-4f6b-8db2-5f97b6f38bdb", 00:16:34.483 "is_configured": true, 00:16:34.483 "data_offset": 0, 00:16:34.483 "data_size": 65536 00:16:34.483 }, 00:16:34.483 { 00:16:34.483 "name": "BaseBdev2", 00:16:34.483 "uuid": "787c04c7-18b0-4e89-91ec-9c94e6da1a93", 00:16:34.483 "is_configured": true, 00:16:34.483 "data_offset": 0, 00:16:34.483 "data_size": 65536 00:16:34.483 }, 00:16:34.483 { 00:16:34.483 "name": "BaseBdev3", 00:16:34.483 "uuid": "22688807-d0f3-4573-853a-ab402cac4314", 
00:16:34.483 "is_configured": true, 00:16:34.483 "data_offset": 0, 00:16:34.483 "data_size": 65536 00:16:34.483 }, 00:16:34.483 { 00:16:34.483 "name": "BaseBdev4", 00:16:34.483 "uuid": "faa8193d-7fa9-4612-b537-0c8dc35033ea", 00:16:34.483 "is_configured": true, 00:16:34.483 "data_offset": 0, 00:16:34.483 "data_size": 65536 00:16:34.483 } 00:16:34.483 ] 00:16:34.483 }' 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.483 04:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.050 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:35.050 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:35.050 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.050 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.050 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.050 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.050 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.051 [2024-11-27 04:38:22.476657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:35.051 "name": "Existed_Raid", 00:16:35.051 "aliases": [ 00:16:35.051 "4ee92b64-aa15-40db-b2da-f9dfbbab1f40" 00:16:35.051 ], 00:16:35.051 "product_name": "Raid Volume", 00:16:35.051 "block_size": 512, 00:16:35.051 "num_blocks": 65536, 00:16:35.051 "uuid": "4ee92b64-aa15-40db-b2da-f9dfbbab1f40", 00:16:35.051 "assigned_rate_limits": { 00:16:35.051 "rw_ios_per_sec": 0, 00:16:35.051 "rw_mbytes_per_sec": 0, 00:16:35.051 "r_mbytes_per_sec": 0, 00:16:35.051 "w_mbytes_per_sec": 0 00:16:35.051 }, 00:16:35.051 "claimed": false, 00:16:35.051 "zoned": false, 00:16:35.051 "supported_io_types": { 00:16:35.051 "read": true, 00:16:35.051 "write": true, 00:16:35.051 "unmap": false, 00:16:35.051 "flush": false, 00:16:35.051 "reset": true, 00:16:35.051 "nvme_admin": false, 00:16:35.051 "nvme_io": false, 00:16:35.051 "nvme_io_md": false, 00:16:35.051 "write_zeroes": true, 00:16:35.051 "zcopy": false, 00:16:35.051 "get_zone_info": false, 00:16:35.051 "zone_management": false, 00:16:35.051 "zone_append": false, 00:16:35.051 "compare": false, 00:16:35.051 "compare_and_write": false, 00:16:35.051 "abort": false, 00:16:35.051 "seek_hole": false, 00:16:35.051 "seek_data": false, 00:16:35.051 "copy": false, 00:16:35.051 "nvme_iov_md": false 00:16:35.051 }, 00:16:35.051 "memory_domains": [ 00:16:35.051 { 00:16:35.051 "dma_device_id": "system", 00:16:35.051 "dma_device_type": 1 00:16:35.051 }, 00:16:35.051 { 00:16:35.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.051 "dma_device_type": 2 00:16:35.051 }, 00:16:35.051 { 00:16:35.051 "dma_device_id": "system", 00:16:35.051 "dma_device_type": 1 00:16:35.051 }, 00:16:35.051 { 00:16:35.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.051 "dma_device_type": 2 00:16:35.051 }, 00:16:35.051 { 00:16:35.051 "dma_device_id": "system", 00:16:35.051 "dma_device_type": 1 00:16:35.051 }, 00:16:35.051 { 00:16:35.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.051 "dma_device_type": 2 
00:16:35.051 }, 00:16:35.051 { 00:16:35.051 "dma_device_id": "system", 00:16:35.051 "dma_device_type": 1 00:16:35.051 }, 00:16:35.051 { 00:16:35.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.051 "dma_device_type": 2 00:16:35.051 } 00:16:35.051 ], 00:16:35.051 "driver_specific": { 00:16:35.051 "raid": { 00:16:35.051 "uuid": "4ee92b64-aa15-40db-b2da-f9dfbbab1f40", 00:16:35.051 "strip_size_kb": 0, 00:16:35.051 "state": "online", 00:16:35.051 "raid_level": "raid1", 00:16:35.051 "superblock": false, 00:16:35.051 "num_base_bdevs": 4, 00:16:35.051 "num_base_bdevs_discovered": 4, 00:16:35.051 "num_base_bdevs_operational": 4, 00:16:35.051 "base_bdevs_list": [ 00:16:35.051 { 00:16:35.051 "name": "BaseBdev1", 00:16:35.051 "uuid": "0d7c23a3-b559-4f6b-8db2-5f97b6f38bdb", 00:16:35.051 "is_configured": true, 00:16:35.051 "data_offset": 0, 00:16:35.051 "data_size": 65536 00:16:35.051 }, 00:16:35.051 { 00:16:35.051 "name": "BaseBdev2", 00:16:35.051 "uuid": "787c04c7-18b0-4e89-91ec-9c94e6da1a93", 00:16:35.051 "is_configured": true, 00:16:35.051 "data_offset": 0, 00:16:35.051 "data_size": 65536 00:16:35.051 }, 00:16:35.051 { 00:16:35.051 "name": "BaseBdev3", 00:16:35.051 "uuid": "22688807-d0f3-4573-853a-ab402cac4314", 00:16:35.051 "is_configured": true, 00:16:35.051 "data_offset": 0, 00:16:35.051 "data_size": 65536 00:16:35.051 }, 00:16:35.051 { 00:16:35.051 "name": "BaseBdev4", 00:16:35.051 "uuid": "faa8193d-7fa9-4612-b537-0c8dc35033ea", 00:16:35.051 "is_configured": true, 00:16:35.051 "data_offset": 0, 00:16:35.051 "data_size": 65536 00:16:35.051 } 00:16:35.051 ] 00:16:35.051 } 00:16:35.051 } 00:16:35.051 }' 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:35.051 BaseBdev2 00:16:35.051 BaseBdev3 00:16:35.051 BaseBdev4' 00:16:35.051 
04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.051 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.311 [2024-11-27 04:38:22.820347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.311 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.570 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.570 "name": "Existed_Raid", 00:16:35.570 "uuid": "4ee92b64-aa15-40db-b2da-f9dfbbab1f40", 00:16:35.570 "strip_size_kb": 0, 00:16:35.570 "state": "online", 00:16:35.570 "raid_level": "raid1", 00:16:35.570 "superblock": false, 00:16:35.570 "num_base_bdevs": 4, 00:16:35.570 "num_base_bdevs_discovered": 3, 00:16:35.570 "num_base_bdevs_operational": 3, 00:16:35.570 "base_bdevs_list": [ 00:16:35.570 { 00:16:35.570 "name": null, 00:16:35.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.570 "is_configured": false, 00:16:35.570 "data_offset": 0, 00:16:35.570 "data_size": 65536 00:16:35.570 }, 00:16:35.570 { 00:16:35.570 "name": "BaseBdev2", 00:16:35.570 "uuid": "787c04c7-18b0-4e89-91ec-9c94e6da1a93", 00:16:35.570 "is_configured": true, 00:16:35.570 "data_offset": 0, 00:16:35.570 "data_size": 65536 00:16:35.570 }, 00:16:35.570 { 00:16:35.570 "name": "BaseBdev3", 00:16:35.570 "uuid": "22688807-d0f3-4573-853a-ab402cac4314", 00:16:35.570 "is_configured": true, 00:16:35.570 "data_offset": 0, 00:16:35.570 "data_size": 65536 00:16:35.570 }, 00:16:35.570 { 
00:16:35.570 "name": "BaseBdev4", 00:16:35.570 "uuid": "faa8193d-7fa9-4612-b537-0c8dc35033ea", 00:16:35.570 "is_configured": true, 00:16:35.570 "data_offset": 0, 00:16:35.570 "data_size": 65536 00:16:35.570 } 00:16:35.570 ] 00:16:35.570 }' 00:16:35.570 04:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.570 04:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.830 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:35.830 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:35.830 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.830 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.830 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:35.830 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.830 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.090 [2024-11-27 04:38:23.456337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.090 
04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.090 [2024-11-27 04:38:23.589550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.090 04:38:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:36.090 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.349 [2024-11-27 04:38:23.729281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:36.349 [2024-11-27 04:38:23.729403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.349 [2024-11-27 04:38:23.814388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.349 [2024-11-27 04:38:23.814471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.349 [2024-11-27 04:38:23.814493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.349 04:38:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.349 BaseBdev2 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.349 04:38:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.349 [ 00:16:36.349 { 00:16:36.349 "name": "BaseBdev2", 00:16:36.349 "aliases": [ 00:16:36.349 "f9c5346e-32e3-46d5-9bc4-844ab09b2f49" 00:16:36.349 ], 00:16:36.349 "product_name": "Malloc disk", 00:16:36.349 "block_size": 512, 00:16:36.349 "num_blocks": 65536, 00:16:36.349 "uuid": "f9c5346e-32e3-46d5-9bc4-844ab09b2f49", 00:16:36.349 "assigned_rate_limits": { 00:16:36.349 "rw_ios_per_sec": 0, 00:16:36.349 "rw_mbytes_per_sec": 0, 00:16:36.349 "r_mbytes_per_sec": 0, 00:16:36.349 "w_mbytes_per_sec": 0 00:16:36.349 }, 00:16:36.349 "claimed": false, 00:16:36.349 "zoned": false, 00:16:36.349 "supported_io_types": { 00:16:36.349 "read": true, 00:16:36.349 "write": true, 00:16:36.349 "unmap": true, 00:16:36.349 "flush": true, 00:16:36.349 "reset": true, 00:16:36.349 "nvme_admin": false, 00:16:36.349 "nvme_io": false, 00:16:36.349 "nvme_io_md": false, 00:16:36.349 "write_zeroes": true, 00:16:36.349 "zcopy": true, 00:16:36.349 "get_zone_info": false, 00:16:36.349 "zone_management": false, 00:16:36.349 "zone_append": false, 00:16:36.349 "compare": false, 00:16:36.349 "compare_and_write": false, 
00:16:36.349 "abort": true, 00:16:36.349 "seek_hole": false, 00:16:36.349 "seek_data": false, 00:16:36.349 "copy": true, 00:16:36.349 "nvme_iov_md": false 00:16:36.349 }, 00:16:36.349 "memory_domains": [ 00:16:36.349 { 00:16:36.349 "dma_device_id": "system", 00:16:36.349 "dma_device_type": 1 00:16:36.349 }, 00:16:36.349 { 00:16:36.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.349 "dma_device_type": 2 00:16:36.349 } 00:16:36.349 ], 00:16:36.349 "driver_specific": {} 00:16:36.349 } 00:16:36.349 ] 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:36.349 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:36.350 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:36.350 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:36.350 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.350 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.609 BaseBdev3 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.609 04:38:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.609 04:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.609 [ 00:16:36.609 { 00:16:36.609 "name": "BaseBdev3", 00:16:36.609 "aliases": [ 00:16:36.609 "d9f3aefd-d298-4267-95e5-b707616227aa" 00:16:36.609 ], 00:16:36.609 "product_name": "Malloc disk", 00:16:36.609 "block_size": 512, 00:16:36.609 "num_blocks": 65536, 00:16:36.609 "uuid": "d9f3aefd-d298-4267-95e5-b707616227aa", 00:16:36.609 "assigned_rate_limits": { 00:16:36.609 "rw_ios_per_sec": 0, 00:16:36.609 "rw_mbytes_per_sec": 0, 00:16:36.609 "r_mbytes_per_sec": 0, 00:16:36.609 "w_mbytes_per_sec": 0 00:16:36.609 }, 00:16:36.609 "claimed": false, 00:16:36.609 "zoned": false, 00:16:36.609 "supported_io_types": { 00:16:36.609 "read": true, 00:16:36.609 "write": true, 00:16:36.609 "unmap": true, 00:16:36.609 "flush": true, 00:16:36.609 "reset": true, 00:16:36.609 "nvme_admin": false, 00:16:36.609 "nvme_io": false, 00:16:36.609 "nvme_io_md": false, 00:16:36.609 "write_zeroes": true, 00:16:36.609 "zcopy": true, 00:16:36.609 "get_zone_info": false, 00:16:36.609 "zone_management": false, 00:16:36.609 "zone_append": false, 00:16:36.609 "compare": false, 00:16:36.609 "compare_and_write": false, 
00:16:36.609 "abort": true, 00:16:36.609 "seek_hole": false, 00:16:36.609 "seek_data": false, 00:16:36.609 "copy": true, 00:16:36.609 "nvme_iov_md": false 00:16:36.609 }, 00:16:36.609 "memory_domains": [ 00:16:36.609 { 00:16:36.609 "dma_device_id": "system", 00:16:36.609 "dma_device_type": 1 00:16:36.609 }, 00:16:36.609 { 00:16:36.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.609 "dma_device_type": 2 00:16:36.609 } 00:16:36.609 ], 00:16:36.609 "driver_specific": {} 00:16:36.609 } 00:16:36.609 ] 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.609 BaseBdev4 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.609 04:38:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.609 [ 00:16:36.609 { 00:16:36.609 "name": "BaseBdev4", 00:16:36.609 "aliases": [ 00:16:36.609 "fe21abc5-89df-46ba-a91f-ed41af51494b" 00:16:36.609 ], 00:16:36.609 "product_name": "Malloc disk", 00:16:36.609 "block_size": 512, 00:16:36.609 "num_blocks": 65536, 00:16:36.609 "uuid": "fe21abc5-89df-46ba-a91f-ed41af51494b", 00:16:36.609 "assigned_rate_limits": { 00:16:36.609 "rw_ios_per_sec": 0, 00:16:36.609 "rw_mbytes_per_sec": 0, 00:16:36.609 "r_mbytes_per_sec": 0, 00:16:36.609 "w_mbytes_per_sec": 0 00:16:36.609 }, 00:16:36.609 "claimed": false, 00:16:36.609 "zoned": false, 00:16:36.609 "supported_io_types": { 00:16:36.609 "read": true, 00:16:36.609 "write": true, 00:16:36.609 "unmap": true, 00:16:36.609 "flush": true, 00:16:36.609 "reset": true, 00:16:36.609 "nvme_admin": false, 00:16:36.609 "nvme_io": false, 00:16:36.609 "nvme_io_md": false, 00:16:36.609 "write_zeroes": true, 00:16:36.609 "zcopy": true, 00:16:36.609 "get_zone_info": false, 00:16:36.609 "zone_management": false, 00:16:36.609 "zone_append": false, 00:16:36.609 "compare": false, 00:16:36.609 "compare_and_write": false, 
00:16:36.609 "abort": true, 00:16:36.609 "seek_hole": false, 00:16:36.609 "seek_data": false, 00:16:36.609 "copy": true, 00:16:36.609 "nvme_iov_md": false 00:16:36.609 }, 00:16:36.609 "memory_domains": [ 00:16:36.609 { 00:16:36.609 "dma_device_id": "system", 00:16:36.609 "dma_device_type": 1 00:16:36.609 }, 00:16:36.609 { 00:16:36.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.609 "dma_device_type": 2 00:16:36.609 } 00:16:36.609 ], 00:16:36.609 "driver_specific": {} 00:16:36.609 } 00:16:36.609 ] 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.609 [2024-11-27 04:38:24.110388] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.609 [2024-11-27 04:38:24.110451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.609 [2024-11-27 04:38:24.110481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:36.609 [2024-11-27 04:38:24.112871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.609 [2024-11-27 04:38:24.112941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:36.609 04:38:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.609 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.610 "name": "Existed_Raid", 00:16:36.610 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:36.610 "strip_size_kb": 0, 00:16:36.610 "state": "configuring", 00:16:36.610 "raid_level": "raid1", 00:16:36.610 "superblock": false, 00:16:36.610 "num_base_bdevs": 4, 00:16:36.610 "num_base_bdevs_discovered": 3, 00:16:36.610 "num_base_bdevs_operational": 4, 00:16:36.610 "base_bdevs_list": [ 00:16:36.610 { 00:16:36.610 "name": "BaseBdev1", 00:16:36.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.610 "is_configured": false, 00:16:36.610 "data_offset": 0, 00:16:36.610 "data_size": 0 00:16:36.610 }, 00:16:36.610 { 00:16:36.610 "name": "BaseBdev2", 00:16:36.610 "uuid": "f9c5346e-32e3-46d5-9bc4-844ab09b2f49", 00:16:36.610 "is_configured": true, 00:16:36.610 "data_offset": 0, 00:16:36.610 "data_size": 65536 00:16:36.610 }, 00:16:36.610 { 00:16:36.610 "name": "BaseBdev3", 00:16:36.610 "uuid": "d9f3aefd-d298-4267-95e5-b707616227aa", 00:16:36.610 "is_configured": true, 00:16:36.610 "data_offset": 0, 00:16:36.610 "data_size": 65536 00:16:36.610 }, 00:16:36.610 { 00:16:36.610 "name": "BaseBdev4", 00:16:36.610 "uuid": "fe21abc5-89df-46ba-a91f-ed41af51494b", 00:16:36.610 "is_configured": true, 00:16:36.610 "data_offset": 0, 00:16:36.610 "data_size": 65536 00:16:36.610 } 00:16:36.610 ] 00:16:36.610 }' 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.610 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.176 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:37.176 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.176 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.176 [2024-11-27 04:38:24.630565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.176 04:38:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.176 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:37.176 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.176 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.176 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.176 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.177 "name": "Existed_Raid", 00:16:37.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.177 
"strip_size_kb": 0, 00:16:37.177 "state": "configuring", 00:16:37.177 "raid_level": "raid1", 00:16:37.177 "superblock": false, 00:16:37.177 "num_base_bdevs": 4, 00:16:37.177 "num_base_bdevs_discovered": 2, 00:16:37.177 "num_base_bdevs_operational": 4, 00:16:37.177 "base_bdevs_list": [ 00:16:37.177 { 00:16:37.177 "name": "BaseBdev1", 00:16:37.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.177 "is_configured": false, 00:16:37.177 "data_offset": 0, 00:16:37.177 "data_size": 0 00:16:37.177 }, 00:16:37.177 { 00:16:37.177 "name": null, 00:16:37.177 "uuid": "f9c5346e-32e3-46d5-9bc4-844ab09b2f49", 00:16:37.177 "is_configured": false, 00:16:37.177 "data_offset": 0, 00:16:37.177 "data_size": 65536 00:16:37.177 }, 00:16:37.177 { 00:16:37.177 "name": "BaseBdev3", 00:16:37.177 "uuid": "d9f3aefd-d298-4267-95e5-b707616227aa", 00:16:37.177 "is_configured": true, 00:16:37.177 "data_offset": 0, 00:16:37.177 "data_size": 65536 00:16:37.177 }, 00:16:37.177 { 00:16:37.177 "name": "BaseBdev4", 00:16:37.177 "uuid": "fe21abc5-89df-46ba-a91f-ed41af51494b", 00:16:37.177 "is_configured": true, 00:16:37.177 "data_offset": 0, 00:16:37.177 "data_size": 65536 00:16:37.177 } 00:16:37.177 ] 00:16:37.177 }' 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.177 04:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.745 04:38:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.745 [2024-11-27 04:38:25.228744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.745 BaseBdev1 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.745 [ 00:16:37.745 { 00:16:37.745 "name": "BaseBdev1", 00:16:37.745 "aliases": [ 00:16:37.745 "43f39cce-a2e0-42d8-b869-7386f3d2a0b4" 00:16:37.745 ], 00:16:37.745 "product_name": "Malloc disk", 00:16:37.745 "block_size": 512, 00:16:37.745 "num_blocks": 65536, 00:16:37.745 "uuid": "43f39cce-a2e0-42d8-b869-7386f3d2a0b4", 00:16:37.745 "assigned_rate_limits": { 00:16:37.745 "rw_ios_per_sec": 0, 00:16:37.745 "rw_mbytes_per_sec": 0, 00:16:37.745 "r_mbytes_per_sec": 0, 00:16:37.745 "w_mbytes_per_sec": 0 00:16:37.745 }, 00:16:37.745 "claimed": true, 00:16:37.745 "claim_type": "exclusive_write", 00:16:37.745 "zoned": false, 00:16:37.745 "supported_io_types": { 00:16:37.745 "read": true, 00:16:37.745 "write": true, 00:16:37.745 "unmap": true, 00:16:37.745 "flush": true, 00:16:37.745 "reset": true, 00:16:37.745 "nvme_admin": false, 00:16:37.745 "nvme_io": false, 00:16:37.745 "nvme_io_md": false, 00:16:37.745 "write_zeroes": true, 00:16:37.745 "zcopy": true, 00:16:37.745 "get_zone_info": false, 00:16:37.745 "zone_management": false, 00:16:37.745 "zone_append": false, 00:16:37.745 "compare": false, 00:16:37.745 "compare_and_write": false, 00:16:37.745 "abort": true, 00:16:37.745 "seek_hole": false, 00:16:37.745 "seek_data": false, 00:16:37.745 "copy": true, 00:16:37.745 "nvme_iov_md": false 00:16:37.745 }, 00:16:37.745 "memory_domains": [ 00:16:37.745 { 00:16:37.745 "dma_device_id": "system", 00:16:37.745 "dma_device_type": 1 00:16:37.745 }, 00:16:37.745 { 00:16:37.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.745 "dma_device_type": 2 00:16:37.745 } 00:16:37.745 ], 00:16:37.745 "driver_specific": {} 00:16:37.745 } 00:16:37.745 ] 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.745 04:38:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.746 "name": "Existed_Raid", 00:16:37.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.746 
"strip_size_kb": 0, 00:16:37.746 "state": "configuring", 00:16:37.746 "raid_level": "raid1", 00:16:37.746 "superblock": false, 00:16:37.746 "num_base_bdevs": 4, 00:16:37.746 "num_base_bdevs_discovered": 3, 00:16:37.746 "num_base_bdevs_operational": 4, 00:16:37.746 "base_bdevs_list": [ 00:16:37.746 { 00:16:37.746 "name": "BaseBdev1", 00:16:37.746 "uuid": "43f39cce-a2e0-42d8-b869-7386f3d2a0b4", 00:16:37.746 "is_configured": true, 00:16:37.746 "data_offset": 0, 00:16:37.746 "data_size": 65536 00:16:37.746 }, 00:16:37.746 { 00:16:37.746 "name": null, 00:16:37.746 "uuid": "f9c5346e-32e3-46d5-9bc4-844ab09b2f49", 00:16:37.746 "is_configured": false, 00:16:37.746 "data_offset": 0, 00:16:37.746 "data_size": 65536 00:16:37.746 }, 00:16:37.746 { 00:16:37.746 "name": "BaseBdev3", 00:16:37.746 "uuid": "d9f3aefd-d298-4267-95e5-b707616227aa", 00:16:37.746 "is_configured": true, 00:16:37.746 "data_offset": 0, 00:16:37.746 "data_size": 65536 00:16:37.746 }, 00:16:37.746 { 00:16:37.746 "name": "BaseBdev4", 00:16:37.746 "uuid": "fe21abc5-89df-46ba-a91f-ed41af51494b", 00:16:37.746 "is_configured": true, 00:16:37.746 "data_offset": 0, 00:16:37.746 "data_size": 65536 00:16:37.746 } 00:16:37.746 ] 00:16:37.746 }' 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.746 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.313 
04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.313 [2024-11-27 04:38:25.829028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.313 "name": "Existed_Raid", 00:16:38.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.313 "strip_size_kb": 0, 00:16:38.313 "state": "configuring", 00:16:38.313 "raid_level": "raid1", 00:16:38.313 "superblock": false, 00:16:38.313 "num_base_bdevs": 4, 00:16:38.313 "num_base_bdevs_discovered": 2, 00:16:38.313 "num_base_bdevs_operational": 4, 00:16:38.313 "base_bdevs_list": [ 00:16:38.313 { 00:16:38.313 "name": "BaseBdev1", 00:16:38.313 "uuid": "43f39cce-a2e0-42d8-b869-7386f3d2a0b4", 00:16:38.313 "is_configured": true, 00:16:38.313 "data_offset": 0, 00:16:38.313 "data_size": 65536 00:16:38.313 }, 00:16:38.313 { 00:16:38.313 "name": null, 00:16:38.313 "uuid": "f9c5346e-32e3-46d5-9bc4-844ab09b2f49", 00:16:38.313 "is_configured": false, 00:16:38.313 "data_offset": 0, 00:16:38.313 "data_size": 65536 00:16:38.313 }, 00:16:38.313 { 00:16:38.313 "name": null, 00:16:38.313 "uuid": "d9f3aefd-d298-4267-95e5-b707616227aa", 00:16:38.313 "is_configured": false, 00:16:38.313 "data_offset": 0, 00:16:38.313 "data_size": 65536 00:16:38.313 }, 00:16:38.313 { 00:16:38.313 "name": "BaseBdev4", 00:16:38.313 "uuid": "fe21abc5-89df-46ba-a91f-ed41af51494b", 00:16:38.313 "is_configured": true, 00:16:38.313 "data_offset": 0, 00:16:38.313 "data_size": 65536 00:16:38.313 } 00:16:38.313 ] 00:16:38.313 }' 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.313 04:38:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.879 [2024-11-27 04:38:26.389169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.879 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.879 "name": "Existed_Raid", 00:16:38.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.879 "strip_size_kb": 0, 00:16:38.879 "state": "configuring", 00:16:38.879 "raid_level": "raid1", 00:16:38.879 "superblock": false, 00:16:38.879 "num_base_bdevs": 4, 00:16:38.879 "num_base_bdevs_discovered": 3, 00:16:38.879 "num_base_bdevs_operational": 4, 00:16:38.879 "base_bdevs_list": [ 00:16:38.879 { 00:16:38.879 "name": "BaseBdev1", 00:16:38.879 "uuid": "43f39cce-a2e0-42d8-b869-7386f3d2a0b4", 00:16:38.879 "is_configured": true, 00:16:38.879 "data_offset": 0, 00:16:38.879 "data_size": 65536 00:16:38.879 }, 00:16:38.879 { 00:16:38.879 "name": null, 00:16:38.879 "uuid": "f9c5346e-32e3-46d5-9bc4-844ab09b2f49", 00:16:38.879 "is_configured": false, 00:16:38.879 "data_offset": 0, 00:16:38.879 "data_size": 65536 00:16:38.879 }, 00:16:38.879 { 
00:16:38.879 "name": "BaseBdev3", 00:16:38.879 "uuid": "d9f3aefd-d298-4267-95e5-b707616227aa", 00:16:38.879 "is_configured": true, 00:16:38.879 "data_offset": 0, 00:16:38.879 "data_size": 65536 00:16:38.879 }, 00:16:38.879 { 00:16:38.879 "name": "BaseBdev4", 00:16:38.879 "uuid": "fe21abc5-89df-46ba-a91f-ed41af51494b", 00:16:38.880 "is_configured": true, 00:16:38.880 "data_offset": 0, 00:16:38.880 "data_size": 65536 00:16:38.880 } 00:16:38.880 ] 00:16:38.880 }' 00:16:38.880 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.880 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.446 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.446 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.446 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:39.446 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.446 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.446 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:39.446 04:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:39.446 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.446 04:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.446 [2024-11-27 04:38:26.973328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.446 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.705 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.705 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.705 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.705 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.705 "name": "Existed_Raid", 00:16:39.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.705 "strip_size_kb": 0, 00:16:39.705 "state": "configuring", 00:16:39.705 "raid_level": "raid1", 00:16:39.705 "superblock": false, 00:16:39.705 
"num_base_bdevs": 4, 00:16:39.705 "num_base_bdevs_discovered": 2, 00:16:39.705 "num_base_bdevs_operational": 4, 00:16:39.705 "base_bdevs_list": [ 00:16:39.705 { 00:16:39.705 "name": null, 00:16:39.705 "uuid": "43f39cce-a2e0-42d8-b869-7386f3d2a0b4", 00:16:39.705 "is_configured": false, 00:16:39.705 "data_offset": 0, 00:16:39.705 "data_size": 65536 00:16:39.705 }, 00:16:39.705 { 00:16:39.705 "name": null, 00:16:39.705 "uuid": "f9c5346e-32e3-46d5-9bc4-844ab09b2f49", 00:16:39.705 "is_configured": false, 00:16:39.705 "data_offset": 0, 00:16:39.705 "data_size": 65536 00:16:39.705 }, 00:16:39.705 { 00:16:39.705 "name": "BaseBdev3", 00:16:39.705 "uuid": "d9f3aefd-d298-4267-95e5-b707616227aa", 00:16:39.705 "is_configured": true, 00:16:39.705 "data_offset": 0, 00:16:39.705 "data_size": 65536 00:16:39.705 }, 00:16:39.705 { 00:16:39.705 "name": "BaseBdev4", 00:16:39.705 "uuid": "fe21abc5-89df-46ba-a91f-ed41af51494b", 00:16:39.705 "is_configured": true, 00:16:39.705 "data_offset": 0, 00:16:39.705 "data_size": 65536 00:16:39.705 } 00:16:39.705 ] 00:16:39.705 }' 00:16:39.705 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.705 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.963 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.963 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.963 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.963 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:40.222 04:38:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.222 [2024-11-27 04:38:27.630107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.222 04:38:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.222 "name": "Existed_Raid", 00:16:40.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.222 "strip_size_kb": 0, 00:16:40.222 "state": "configuring", 00:16:40.222 "raid_level": "raid1", 00:16:40.222 "superblock": false, 00:16:40.222 "num_base_bdevs": 4, 00:16:40.222 "num_base_bdevs_discovered": 3, 00:16:40.222 "num_base_bdevs_operational": 4, 00:16:40.222 "base_bdevs_list": [ 00:16:40.222 { 00:16:40.222 "name": null, 00:16:40.222 "uuid": "43f39cce-a2e0-42d8-b869-7386f3d2a0b4", 00:16:40.222 "is_configured": false, 00:16:40.222 "data_offset": 0, 00:16:40.222 "data_size": 65536 00:16:40.222 }, 00:16:40.222 { 00:16:40.222 "name": "BaseBdev2", 00:16:40.222 "uuid": "f9c5346e-32e3-46d5-9bc4-844ab09b2f49", 00:16:40.222 "is_configured": true, 00:16:40.222 "data_offset": 0, 00:16:40.222 "data_size": 65536 00:16:40.222 }, 00:16:40.222 { 00:16:40.222 "name": "BaseBdev3", 00:16:40.222 "uuid": "d9f3aefd-d298-4267-95e5-b707616227aa", 00:16:40.222 "is_configured": true, 00:16:40.222 "data_offset": 0, 00:16:40.222 "data_size": 65536 00:16:40.222 }, 00:16:40.222 { 00:16:40.222 "name": "BaseBdev4", 00:16:40.222 "uuid": "fe21abc5-89df-46ba-a91f-ed41af51494b", 00:16:40.222 "is_configured": true, 00:16:40.222 "data_offset": 0, 00:16:40.222 "data_size": 65536 00:16:40.222 } 00:16:40.222 ] 00:16:40.222 }' 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.222 04:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 43f39cce-a2e0-42d8-b869-7386f3d2a0b4 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.789 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.789 [2024-11-27 04:38:28.308496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:40.789 [2024-11-27 04:38:28.308552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:40.789 [2024-11-27 04:38:28.308568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:40.790 [2024-11-27 04:38:28.308932] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:40.790 [2024-11-27 04:38:28.309136] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:40.790 [2024-11-27 04:38:28.309153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:40.790 [2024-11-27 04:38:28.309460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.790 NewBaseBdev 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.790 04:38:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.790 [ 00:16:40.790 { 00:16:40.790 "name": "NewBaseBdev", 00:16:40.790 "aliases": [ 00:16:40.790 "43f39cce-a2e0-42d8-b869-7386f3d2a0b4" 00:16:40.790 ], 00:16:40.790 "product_name": "Malloc disk", 00:16:40.790 "block_size": 512, 00:16:40.790 "num_blocks": 65536, 00:16:40.790 "uuid": "43f39cce-a2e0-42d8-b869-7386f3d2a0b4", 00:16:40.790 "assigned_rate_limits": { 00:16:40.790 "rw_ios_per_sec": 0, 00:16:40.790 "rw_mbytes_per_sec": 0, 00:16:40.790 "r_mbytes_per_sec": 0, 00:16:40.790 "w_mbytes_per_sec": 0 00:16:40.790 }, 00:16:40.790 "claimed": true, 00:16:40.790 "claim_type": "exclusive_write", 00:16:40.790 "zoned": false, 00:16:40.790 "supported_io_types": { 00:16:40.790 "read": true, 00:16:40.790 "write": true, 00:16:40.790 "unmap": true, 00:16:40.790 "flush": true, 00:16:40.790 "reset": true, 00:16:40.790 "nvme_admin": false, 00:16:40.790 "nvme_io": false, 00:16:40.790 "nvme_io_md": false, 00:16:40.790 "write_zeroes": true, 00:16:40.790 "zcopy": true, 00:16:40.790 "get_zone_info": false, 00:16:40.790 "zone_management": false, 00:16:40.790 "zone_append": false, 00:16:40.790 "compare": false, 00:16:40.790 "compare_and_write": false, 00:16:40.790 "abort": true, 00:16:40.790 "seek_hole": false, 00:16:40.790 "seek_data": false, 00:16:40.790 "copy": true, 00:16:40.790 "nvme_iov_md": false 00:16:40.790 }, 00:16:40.790 "memory_domains": [ 00:16:40.790 { 00:16:40.790 "dma_device_id": "system", 00:16:40.790 "dma_device_type": 1 00:16:40.790 }, 00:16:40.790 { 00:16:40.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.790 "dma_device_type": 2 00:16:40.790 } 00:16:40.790 ], 00:16:40.790 "driver_specific": {} 00:16:40.790 } 00:16:40.790 ] 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:40.790 04:38:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.790 "name": "Existed_Raid", 00:16:40.790 "uuid": "5e827af8-2cc1-4382-9107-3749b6be51dd", 00:16:40.790 "strip_size_kb": 0, 00:16:40.790 "state": "online", 00:16:40.790 "raid_level": "raid1", 
00:16:40.790 "superblock": false, 00:16:40.790 "num_base_bdevs": 4, 00:16:40.790 "num_base_bdevs_discovered": 4, 00:16:40.790 "num_base_bdevs_operational": 4, 00:16:40.790 "base_bdevs_list": [ 00:16:40.790 { 00:16:40.790 "name": "NewBaseBdev", 00:16:40.790 "uuid": "43f39cce-a2e0-42d8-b869-7386f3d2a0b4", 00:16:40.790 "is_configured": true, 00:16:40.790 "data_offset": 0, 00:16:40.790 "data_size": 65536 00:16:40.790 }, 00:16:40.790 { 00:16:40.790 "name": "BaseBdev2", 00:16:40.790 "uuid": "f9c5346e-32e3-46d5-9bc4-844ab09b2f49", 00:16:40.790 "is_configured": true, 00:16:40.790 "data_offset": 0, 00:16:40.790 "data_size": 65536 00:16:40.790 }, 00:16:40.790 { 00:16:40.790 "name": "BaseBdev3", 00:16:40.790 "uuid": "d9f3aefd-d298-4267-95e5-b707616227aa", 00:16:40.790 "is_configured": true, 00:16:40.790 "data_offset": 0, 00:16:40.790 "data_size": 65536 00:16:40.790 }, 00:16:40.790 { 00:16:40.790 "name": "BaseBdev4", 00:16:40.790 "uuid": "fe21abc5-89df-46ba-a91f-ed41af51494b", 00:16:40.790 "is_configured": true, 00:16:40.790 "data_offset": 0, 00:16:40.790 "data_size": 65536 00:16:40.790 } 00:16:40.790 ] 00:16:40.790 }' 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.790 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.357 [2024-11-27 04:38:28.889131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:41.357 "name": "Existed_Raid", 00:16:41.357 "aliases": [ 00:16:41.357 "5e827af8-2cc1-4382-9107-3749b6be51dd" 00:16:41.357 ], 00:16:41.357 "product_name": "Raid Volume", 00:16:41.357 "block_size": 512, 00:16:41.357 "num_blocks": 65536, 00:16:41.357 "uuid": "5e827af8-2cc1-4382-9107-3749b6be51dd", 00:16:41.357 "assigned_rate_limits": { 00:16:41.357 "rw_ios_per_sec": 0, 00:16:41.357 "rw_mbytes_per_sec": 0, 00:16:41.357 "r_mbytes_per_sec": 0, 00:16:41.357 "w_mbytes_per_sec": 0 00:16:41.357 }, 00:16:41.357 "claimed": false, 00:16:41.357 "zoned": false, 00:16:41.357 "supported_io_types": { 00:16:41.357 "read": true, 00:16:41.357 "write": true, 00:16:41.357 "unmap": false, 00:16:41.357 "flush": false, 00:16:41.357 "reset": true, 00:16:41.357 "nvme_admin": false, 00:16:41.357 "nvme_io": false, 00:16:41.357 "nvme_io_md": false, 00:16:41.357 "write_zeroes": true, 00:16:41.357 "zcopy": false, 00:16:41.357 "get_zone_info": false, 00:16:41.357 "zone_management": false, 00:16:41.357 "zone_append": false, 00:16:41.357 "compare": false, 00:16:41.357 "compare_and_write": false, 00:16:41.357 "abort": false, 00:16:41.357 "seek_hole": false, 00:16:41.357 "seek_data": false, 00:16:41.357 "copy": false, 00:16:41.357 
"nvme_iov_md": false 00:16:41.357 }, 00:16:41.357 "memory_domains": [ 00:16:41.357 { 00:16:41.357 "dma_device_id": "system", 00:16:41.357 "dma_device_type": 1 00:16:41.357 }, 00:16:41.357 { 00:16:41.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.357 "dma_device_type": 2 00:16:41.357 }, 00:16:41.357 { 00:16:41.357 "dma_device_id": "system", 00:16:41.357 "dma_device_type": 1 00:16:41.357 }, 00:16:41.357 { 00:16:41.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.357 "dma_device_type": 2 00:16:41.357 }, 00:16:41.357 { 00:16:41.357 "dma_device_id": "system", 00:16:41.357 "dma_device_type": 1 00:16:41.357 }, 00:16:41.357 { 00:16:41.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.357 "dma_device_type": 2 00:16:41.357 }, 00:16:41.357 { 00:16:41.357 "dma_device_id": "system", 00:16:41.357 "dma_device_type": 1 00:16:41.357 }, 00:16:41.357 { 00:16:41.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.357 "dma_device_type": 2 00:16:41.357 } 00:16:41.357 ], 00:16:41.357 "driver_specific": { 00:16:41.357 "raid": { 00:16:41.357 "uuid": "5e827af8-2cc1-4382-9107-3749b6be51dd", 00:16:41.357 "strip_size_kb": 0, 00:16:41.357 "state": "online", 00:16:41.357 "raid_level": "raid1", 00:16:41.357 "superblock": false, 00:16:41.357 "num_base_bdevs": 4, 00:16:41.357 "num_base_bdevs_discovered": 4, 00:16:41.357 "num_base_bdevs_operational": 4, 00:16:41.357 "base_bdevs_list": [ 00:16:41.357 { 00:16:41.357 "name": "NewBaseBdev", 00:16:41.357 "uuid": "43f39cce-a2e0-42d8-b869-7386f3d2a0b4", 00:16:41.357 "is_configured": true, 00:16:41.357 "data_offset": 0, 00:16:41.357 "data_size": 65536 00:16:41.357 }, 00:16:41.357 { 00:16:41.357 "name": "BaseBdev2", 00:16:41.357 "uuid": "f9c5346e-32e3-46d5-9bc4-844ab09b2f49", 00:16:41.357 "is_configured": true, 00:16:41.357 "data_offset": 0, 00:16:41.357 "data_size": 65536 00:16:41.357 }, 00:16:41.357 { 00:16:41.357 "name": "BaseBdev3", 00:16:41.357 "uuid": "d9f3aefd-d298-4267-95e5-b707616227aa", 00:16:41.357 "is_configured": true, 
00:16:41.357 "data_offset": 0, 00:16:41.357 "data_size": 65536 00:16:41.357 }, 00:16:41.357 { 00:16:41.357 "name": "BaseBdev4", 00:16:41.357 "uuid": "fe21abc5-89df-46ba-a91f-ed41af51494b", 00:16:41.357 "is_configured": true, 00:16:41.357 "data_offset": 0, 00:16:41.357 "data_size": 65536 00:16:41.357 } 00:16:41.357 ] 00:16:41.357 } 00:16:41.357 } 00:16:41.357 }' 00:16:41.357 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:41.616 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:41.616 BaseBdev2 00:16:41.616 BaseBdev3 00:16:41.616 BaseBdev4' 00:16:41.616 04:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.616 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.875 [2024-11-27 04:38:29.244754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:41.875 [2024-11-27 04:38:29.244801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.875 [2024-11-27 04:38:29.244906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.875 [2024-11-27 04:38:29.245277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.875 [2024-11-27 04:38:29.245301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73434 
00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73434 ']' 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73434 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73434 00:16:41.875 killing process with pid 73434 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.875 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.876 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73434' 00:16:41.876 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73434 00:16:41.876 [2024-11-27 04:38:29.286199] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.876 04:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73434 00:16:42.134 [2024-11-27 04:38:29.632811] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.067 ************************************ 00:16:43.067 END TEST raid_state_function_test 00:16:43.067 ************************************ 00:16:43.067 04:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:43.067 00:16:43.067 real 0m12.751s 00:16:43.067 user 0m21.203s 00:16:43.067 sys 0m1.676s 00:16:43.067 04:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.067 04:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.325 04:38:30 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:16:43.325 04:38:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:43.325 04:38:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.325 04:38:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.325 ************************************ 00:16:43.325 START TEST raid_state_function_test_sb 00:16:43.325 ************************************ 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.325 04:38:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.325 Process raid pid: 74112 00:16:43.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=74112 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74112' 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74112 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74112 ']' 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.325 04:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.325 [2024-11-27 04:38:30.822476] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:16:43.325 [2024-11-27 04:38:30.822873] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.583 [2024-11-27 04:38:30.995363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.583 [2024-11-27 04:38:31.129022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.841 [2024-11-27 04:38:31.335906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.841 [2024-11-27 04:38:31.336151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.407 04:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.407 04:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:44.407 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:44.407 04:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.407 04:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.407 [2024-11-27 04:38:31.820939] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.407 [2024-11-27 04:38:31.821143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.407 [2024-11-27 04:38:31.821293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.407 [2024-11-27 04:38:31.821471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.408 [2024-11-27 04:38:31.821630] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:44.408 [2024-11-27 04:38:31.821814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:44.408 [2024-11-27 04:38:31.821978] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:44.408 [2024-11-27 04:38:31.822172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.408 04:38:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.408 "name": "Existed_Raid", 00:16:44.408 "uuid": "06648f0c-929f-40cb-85f8-e2c74a12c8cb", 00:16:44.408 "strip_size_kb": 0, 00:16:44.408 "state": "configuring", 00:16:44.408 "raid_level": "raid1", 00:16:44.408 "superblock": true, 00:16:44.408 "num_base_bdevs": 4, 00:16:44.408 "num_base_bdevs_discovered": 0, 00:16:44.408 "num_base_bdevs_operational": 4, 00:16:44.408 "base_bdevs_list": [ 00:16:44.408 { 00:16:44.408 "name": "BaseBdev1", 00:16:44.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.408 "is_configured": false, 00:16:44.408 "data_offset": 0, 00:16:44.408 "data_size": 0 00:16:44.408 }, 00:16:44.408 { 00:16:44.408 "name": "BaseBdev2", 00:16:44.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.408 "is_configured": false, 00:16:44.408 "data_offset": 0, 00:16:44.408 "data_size": 0 00:16:44.408 }, 00:16:44.408 { 00:16:44.408 "name": "BaseBdev3", 00:16:44.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.408 "is_configured": false, 00:16:44.408 "data_offset": 0, 00:16:44.408 "data_size": 0 00:16:44.408 }, 00:16:44.408 { 00:16:44.408 "name": "BaseBdev4", 00:16:44.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.408 "is_configured": false, 00:16:44.408 "data_offset": 0, 00:16:44.408 "data_size": 0 00:16:44.408 } 00:16:44.408 ] 00:16:44.408 }' 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.408 04:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.979 04:38:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:44.979 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.979 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.979 [2024-11-27 04:38:32.325026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.979 [2024-11-27 04:38:32.325074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:44.979 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.979 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:44.979 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.979 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.979 [2024-11-27 04:38:32.337014] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.979 [2024-11-27 04:38:32.337206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.979 [2024-11-27 04:38:32.337358] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.979 [2024-11-27 04:38:32.337411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.979 [2024-11-27 04:38:32.337435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:44.980 [2024-11-27 04:38:32.337465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:44.980 [2024-11-27 04:38:32.337488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:16:44.980 [2024-11-27 04:38:32.337515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.980 [2024-11-27 04:38:32.382831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.980 BaseBdev1 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.980 [ 00:16:44.980 { 00:16:44.980 "name": "BaseBdev1", 00:16:44.980 "aliases": [ 00:16:44.980 "8ef94537-d93a-4139-a99f-a94e7155a192" 00:16:44.980 ], 00:16:44.980 "product_name": "Malloc disk", 00:16:44.980 "block_size": 512, 00:16:44.980 "num_blocks": 65536, 00:16:44.980 "uuid": "8ef94537-d93a-4139-a99f-a94e7155a192", 00:16:44.980 "assigned_rate_limits": { 00:16:44.980 "rw_ios_per_sec": 0, 00:16:44.980 "rw_mbytes_per_sec": 0, 00:16:44.980 "r_mbytes_per_sec": 0, 00:16:44.980 "w_mbytes_per_sec": 0 00:16:44.980 }, 00:16:44.980 "claimed": true, 00:16:44.980 "claim_type": "exclusive_write", 00:16:44.980 "zoned": false, 00:16:44.980 "supported_io_types": { 00:16:44.980 "read": true, 00:16:44.980 "write": true, 00:16:44.980 "unmap": true, 00:16:44.980 "flush": true, 00:16:44.980 "reset": true, 00:16:44.980 "nvme_admin": false, 00:16:44.980 "nvme_io": false, 00:16:44.980 "nvme_io_md": false, 00:16:44.980 "write_zeroes": true, 00:16:44.980 "zcopy": true, 00:16:44.980 "get_zone_info": false, 00:16:44.980 "zone_management": false, 00:16:44.980 "zone_append": false, 00:16:44.980 "compare": false, 00:16:44.980 "compare_and_write": false, 00:16:44.980 "abort": true, 00:16:44.980 "seek_hole": false, 00:16:44.980 "seek_data": false, 00:16:44.980 "copy": true, 00:16:44.980 "nvme_iov_md": false 00:16:44.980 }, 00:16:44.980 "memory_domains": [ 00:16:44.980 { 00:16:44.980 "dma_device_id": "system", 00:16:44.980 "dma_device_type": 1 00:16:44.980 }, 00:16:44.980 { 00:16:44.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.980 "dma_device_type": 2 00:16:44.980 } 00:16:44.980 
], 00:16:44.980 "driver_specific": {} 00:16:44.980 } 00:16:44.980 ] 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.980 04:38:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.980 "name": "Existed_Raid", 00:16:44.980 "uuid": "425a3894-4363-482a-bd7f-5015fbd13cce", 00:16:44.980 "strip_size_kb": 0, 00:16:44.980 "state": "configuring", 00:16:44.980 "raid_level": "raid1", 00:16:44.980 "superblock": true, 00:16:44.980 "num_base_bdevs": 4, 00:16:44.980 "num_base_bdevs_discovered": 1, 00:16:44.980 "num_base_bdevs_operational": 4, 00:16:44.980 "base_bdevs_list": [ 00:16:44.980 { 00:16:44.980 "name": "BaseBdev1", 00:16:44.980 "uuid": "8ef94537-d93a-4139-a99f-a94e7155a192", 00:16:44.980 "is_configured": true, 00:16:44.980 "data_offset": 2048, 00:16:44.980 "data_size": 63488 00:16:44.980 }, 00:16:44.980 { 00:16:44.980 "name": "BaseBdev2", 00:16:44.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.980 "is_configured": false, 00:16:44.980 "data_offset": 0, 00:16:44.980 "data_size": 0 00:16:44.980 }, 00:16:44.980 { 00:16:44.980 "name": "BaseBdev3", 00:16:44.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.980 "is_configured": false, 00:16:44.980 "data_offset": 0, 00:16:44.980 "data_size": 0 00:16:44.980 }, 00:16:44.980 { 00:16:44.980 "name": "BaseBdev4", 00:16:44.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.980 "is_configured": false, 00:16:44.980 "data_offset": 0, 00:16:44.980 "data_size": 0 00:16:44.980 } 00:16:44.980 ] 00:16:44.980 }' 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.980 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.552 04:38:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.552 [2024-11-27 04:38:32.915030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.552 [2024-11-27 04:38:32.915233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.552 [2024-11-27 04:38:32.923080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.552 [2024-11-27 04:38:32.925465] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.552 [2024-11-27 04:38:32.925519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.552 [2024-11-27 04:38:32.925534] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:45.552 [2024-11-27 04:38:32.925551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.552 [2024-11-27 04:38:32.925561] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:45.552 [2024-11-27 04:38:32.925574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:16:45.552 "name": "Existed_Raid", 00:16:45.552 "uuid": "667dc049-a1be-4ea6-a031-a96729ddb8d4", 00:16:45.552 "strip_size_kb": 0, 00:16:45.552 "state": "configuring", 00:16:45.552 "raid_level": "raid1", 00:16:45.552 "superblock": true, 00:16:45.552 "num_base_bdevs": 4, 00:16:45.552 "num_base_bdevs_discovered": 1, 00:16:45.552 "num_base_bdevs_operational": 4, 00:16:45.552 "base_bdevs_list": [ 00:16:45.552 { 00:16:45.552 "name": "BaseBdev1", 00:16:45.552 "uuid": "8ef94537-d93a-4139-a99f-a94e7155a192", 00:16:45.552 "is_configured": true, 00:16:45.552 "data_offset": 2048, 00:16:45.552 "data_size": 63488 00:16:45.552 }, 00:16:45.552 { 00:16:45.552 "name": "BaseBdev2", 00:16:45.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.552 "is_configured": false, 00:16:45.552 "data_offset": 0, 00:16:45.552 "data_size": 0 00:16:45.552 }, 00:16:45.552 { 00:16:45.552 "name": "BaseBdev3", 00:16:45.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.552 "is_configured": false, 00:16:45.552 "data_offset": 0, 00:16:45.552 "data_size": 0 00:16:45.552 }, 00:16:45.552 { 00:16:45.552 "name": "BaseBdev4", 00:16:45.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.552 "is_configured": false, 00:16:45.552 "data_offset": 0, 00:16:45.552 "data_size": 0 00:16:45.552 } 00:16:45.552 ] 00:16:45.552 }' 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.552 04:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.119 [2024-11-27 04:38:33.474424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:16:46.119 BaseBdev2 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.119 [ 00:16:46.119 { 00:16:46.119 "name": "BaseBdev2", 00:16:46.119 "aliases": [ 00:16:46.119 "1b02ac7e-1977-4eff-9925-d680e0d1639c" 00:16:46.119 ], 00:16:46.119 "product_name": "Malloc disk", 00:16:46.119 "block_size": 512, 00:16:46.119 "num_blocks": 65536, 00:16:46.119 "uuid": "1b02ac7e-1977-4eff-9925-d680e0d1639c", 00:16:46.119 
"assigned_rate_limits": { 00:16:46.119 "rw_ios_per_sec": 0, 00:16:46.119 "rw_mbytes_per_sec": 0, 00:16:46.119 "r_mbytes_per_sec": 0, 00:16:46.119 "w_mbytes_per_sec": 0 00:16:46.119 }, 00:16:46.119 "claimed": true, 00:16:46.119 "claim_type": "exclusive_write", 00:16:46.119 "zoned": false, 00:16:46.119 "supported_io_types": { 00:16:46.119 "read": true, 00:16:46.119 "write": true, 00:16:46.119 "unmap": true, 00:16:46.119 "flush": true, 00:16:46.119 "reset": true, 00:16:46.119 "nvme_admin": false, 00:16:46.119 "nvme_io": false, 00:16:46.119 "nvme_io_md": false, 00:16:46.119 "write_zeroes": true, 00:16:46.119 "zcopy": true, 00:16:46.119 "get_zone_info": false, 00:16:46.119 "zone_management": false, 00:16:46.119 "zone_append": false, 00:16:46.119 "compare": false, 00:16:46.119 "compare_and_write": false, 00:16:46.119 "abort": true, 00:16:46.119 "seek_hole": false, 00:16:46.119 "seek_data": false, 00:16:46.119 "copy": true, 00:16:46.119 "nvme_iov_md": false 00:16:46.119 }, 00:16:46.119 "memory_domains": [ 00:16:46.119 { 00:16:46.119 "dma_device_id": "system", 00:16:46.119 "dma_device_type": 1 00:16:46.119 }, 00:16:46.119 { 00:16:46.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.119 "dma_device_type": 2 00:16:46.119 } 00:16:46.119 ], 00:16:46.119 "driver_specific": {} 00:16:46.119 } 00:16:46.119 ] 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.119 "name": "Existed_Raid", 00:16:46.119 "uuid": "667dc049-a1be-4ea6-a031-a96729ddb8d4", 00:16:46.119 "strip_size_kb": 0, 00:16:46.119 "state": "configuring", 00:16:46.119 "raid_level": "raid1", 00:16:46.119 "superblock": true, 00:16:46.119 "num_base_bdevs": 4, 00:16:46.119 "num_base_bdevs_discovered": 2, 00:16:46.119 "num_base_bdevs_operational": 4, 
00:16:46.119 "base_bdevs_list": [ 00:16:46.119 { 00:16:46.119 "name": "BaseBdev1", 00:16:46.119 "uuid": "8ef94537-d93a-4139-a99f-a94e7155a192", 00:16:46.119 "is_configured": true, 00:16:46.119 "data_offset": 2048, 00:16:46.119 "data_size": 63488 00:16:46.119 }, 00:16:46.119 { 00:16:46.119 "name": "BaseBdev2", 00:16:46.119 "uuid": "1b02ac7e-1977-4eff-9925-d680e0d1639c", 00:16:46.119 "is_configured": true, 00:16:46.119 "data_offset": 2048, 00:16:46.119 "data_size": 63488 00:16:46.119 }, 00:16:46.119 { 00:16:46.119 "name": "BaseBdev3", 00:16:46.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.119 "is_configured": false, 00:16:46.119 "data_offset": 0, 00:16:46.119 "data_size": 0 00:16:46.119 }, 00:16:46.119 { 00:16:46.119 "name": "BaseBdev4", 00:16:46.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.119 "is_configured": false, 00:16:46.119 "data_offset": 0, 00:16:46.119 "data_size": 0 00:16:46.119 } 00:16:46.119 ] 00:16:46.119 }' 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.119 04:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.687 [2024-11-27 04:38:34.073979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.687 BaseBdev3 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.687 [ 00:16:46.687 { 00:16:46.687 "name": "BaseBdev3", 00:16:46.687 "aliases": [ 00:16:46.687 "17908ffb-6632-412b-bb32-1080af2b2e09" 00:16:46.687 ], 00:16:46.687 "product_name": "Malloc disk", 00:16:46.687 "block_size": 512, 00:16:46.687 "num_blocks": 65536, 00:16:46.687 "uuid": "17908ffb-6632-412b-bb32-1080af2b2e09", 00:16:46.687 "assigned_rate_limits": { 00:16:46.687 "rw_ios_per_sec": 0, 00:16:46.687 "rw_mbytes_per_sec": 0, 00:16:46.687 "r_mbytes_per_sec": 0, 00:16:46.687 "w_mbytes_per_sec": 0 00:16:46.687 }, 00:16:46.687 "claimed": true, 00:16:46.687 "claim_type": "exclusive_write", 00:16:46.687 "zoned": false, 00:16:46.687 "supported_io_types": { 00:16:46.687 "read": true, 00:16:46.687 
"write": true, 00:16:46.687 "unmap": true, 00:16:46.687 "flush": true, 00:16:46.687 "reset": true, 00:16:46.687 "nvme_admin": false, 00:16:46.687 "nvme_io": false, 00:16:46.687 "nvme_io_md": false, 00:16:46.687 "write_zeroes": true, 00:16:46.687 "zcopy": true, 00:16:46.687 "get_zone_info": false, 00:16:46.687 "zone_management": false, 00:16:46.687 "zone_append": false, 00:16:46.687 "compare": false, 00:16:46.687 "compare_and_write": false, 00:16:46.687 "abort": true, 00:16:46.687 "seek_hole": false, 00:16:46.687 "seek_data": false, 00:16:46.687 "copy": true, 00:16:46.687 "nvme_iov_md": false 00:16:46.687 }, 00:16:46.687 "memory_domains": [ 00:16:46.687 { 00:16:46.687 "dma_device_id": "system", 00:16:46.687 "dma_device_type": 1 00:16:46.687 }, 00:16:46.687 { 00:16:46.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.687 "dma_device_type": 2 00:16:46.687 } 00:16:46.687 ], 00:16:46.687 "driver_specific": {} 00:16:46.687 } 00:16:46.687 ] 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.687 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.687 "name": "Existed_Raid", 00:16:46.687 "uuid": "667dc049-a1be-4ea6-a031-a96729ddb8d4", 00:16:46.687 "strip_size_kb": 0, 00:16:46.687 "state": "configuring", 00:16:46.687 "raid_level": "raid1", 00:16:46.687 "superblock": true, 00:16:46.687 "num_base_bdevs": 4, 00:16:46.687 "num_base_bdevs_discovered": 3, 00:16:46.687 "num_base_bdevs_operational": 4, 00:16:46.687 "base_bdevs_list": [ 00:16:46.687 { 00:16:46.687 "name": "BaseBdev1", 00:16:46.687 "uuid": "8ef94537-d93a-4139-a99f-a94e7155a192", 00:16:46.687 "is_configured": true, 00:16:46.687 "data_offset": 2048, 00:16:46.687 "data_size": 63488 00:16:46.687 }, 00:16:46.687 { 00:16:46.687 "name": "BaseBdev2", 00:16:46.687 "uuid": 
"1b02ac7e-1977-4eff-9925-d680e0d1639c", 00:16:46.687 "is_configured": true, 00:16:46.688 "data_offset": 2048, 00:16:46.688 "data_size": 63488 00:16:46.688 }, 00:16:46.688 { 00:16:46.688 "name": "BaseBdev3", 00:16:46.688 "uuid": "17908ffb-6632-412b-bb32-1080af2b2e09", 00:16:46.688 "is_configured": true, 00:16:46.688 "data_offset": 2048, 00:16:46.688 "data_size": 63488 00:16:46.688 }, 00:16:46.688 { 00:16:46.688 "name": "BaseBdev4", 00:16:46.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.688 "is_configured": false, 00:16:46.688 "data_offset": 0, 00:16:46.688 "data_size": 0 00:16:46.688 } 00:16:46.688 ] 00:16:46.688 }' 00:16:46.688 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.688 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.254 [2024-11-27 04:38:34.676863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:47.254 [2024-11-27 04:38:34.677221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:47.254 [2024-11-27 04:38:34.677243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:47.254 BaseBdev4 00:16:47.254 [2024-11-27 04:38:34.677583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:47.254 [2024-11-27 04:38:34.677810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:47.254 [2024-11-27 04:38:34.677832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:16:47.254 [2024-11-27 04:38:34.678034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.254 [ 00:16:47.254 { 00:16:47.254 "name": "BaseBdev4", 00:16:47.254 "aliases": [ 00:16:47.254 "c25b516e-a55c-4824-b373-6ccc9cdd9f46" 00:16:47.254 ], 00:16:47.254 "product_name": "Malloc disk", 00:16:47.254 "block_size": 512, 00:16:47.254 
"num_blocks": 65536, 00:16:47.254 "uuid": "c25b516e-a55c-4824-b373-6ccc9cdd9f46", 00:16:47.254 "assigned_rate_limits": { 00:16:47.254 "rw_ios_per_sec": 0, 00:16:47.254 "rw_mbytes_per_sec": 0, 00:16:47.254 "r_mbytes_per_sec": 0, 00:16:47.254 "w_mbytes_per_sec": 0 00:16:47.254 }, 00:16:47.254 "claimed": true, 00:16:47.254 "claim_type": "exclusive_write", 00:16:47.254 "zoned": false, 00:16:47.254 "supported_io_types": { 00:16:47.254 "read": true, 00:16:47.254 "write": true, 00:16:47.254 "unmap": true, 00:16:47.254 "flush": true, 00:16:47.254 "reset": true, 00:16:47.254 "nvme_admin": false, 00:16:47.254 "nvme_io": false, 00:16:47.254 "nvme_io_md": false, 00:16:47.254 "write_zeroes": true, 00:16:47.254 "zcopy": true, 00:16:47.254 "get_zone_info": false, 00:16:47.254 "zone_management": false, 00:16:47.254 "zone_append": false, 00:16:47.254 "compare": false, 00:16:47.254 "compare_and_write": false, 00:16:47.254 "abort": true, 00:16:47.254 "seek_hole": false, 00:16:47.254 "seek_data": false, 00:16:47.254 "copy": true, 00:16:47.254 "nvme_iov_md": false 00:16:47.254 }, 00:16:47.254 "memory_domains": [ 00:16:47.254 { 00:16:47.254 "dma_device_id": "system", 00:16:47.254 "dma_device_type": 1 00:16:47.254 }, 00:16:47.254 { 00:16:47.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.254 "dma_device_type": 2 00:16:47.254 } 00:16:47.254 ], 00:16:47.254 "driver_specific": {} 00:16:47.254 } 00:16:47.254 ] 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.254 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.255 "name": "Existed_Raid", 00:16:47.255 "uuid": "667dc049-a1be-4ea6-a031-a96729ddb8d4", 00:16:47.255 "strip_size_kb": 0, 00:16:47.255 "state": "online", 00:16:47.255 "raid_level": "raid1", 00:16:47.255 "superblock": true, 00:16:47.255 "num_base_bdevs": 4, 
00:16:47.255 "num_base_bdevs_discovered": 4, 00:16:47.255 "num_base_bdevs_operational": 4, 00:16:47.255 "base_bdevs_list": [ 00:16:47.255 { 00:16:47.255 "name": "BaseBdev1", 00:16:47.255 "uuid": "8ef94537-d93a-4139-a99f-a94e7155a192", 00:16:47.255 "is_configured": true, 00:16:47.255 "data_offset": 2048, 00:16:47.255 "data_size": 63488 00:16:47.255 }, 00:16:47.255 { 00:16:47.255 "name": "BaseBdev2", 00:16:47.255 "uuid": "1b02ac7e-1977-4eff-9925-d680e0d1639c", 00:16:47.255 "is_configured": true, 00:16:47.255 "data_offset": 2048, 00:16:47.255 "data_size": 63488 00:16:47.255 }, 00:16:47.255 { 00:16:47.255 "name": "BaseBdev3", 00:16:47.255 "uuid": "17908ffb-6632-412b-bb32-1080af2b2e09", 00:16:47.255 "is_configured": true, 00:16:47.255 "data_offset": 2048, 00:16:47.255 "data_size": 63488 00:16:47.255 }, 00:16:47.255 { 00:16:47.255 "name": "BaseBdev4", 00:16:47.255 "uuid": "c25b516e-a55c-4824-b373-6ccc9cdd9f46", 00:16:47.255 "is_configured": true, 00:16:47.255 "data_offset": 2048, 00:16:47.255 "data_size": 63488 00:16:47.255 } 00:16:47.255 ] 00:16:47.255 }' 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.255 04:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.820 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:47.821 
04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.821 [2024-11-27 04:38:35.261562] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:47.821 "name": "Existed_Raid", 00:16:47.821 "aliases": [ 00:16:47.821 "667dc049-a1be-4ea6-a031-a96729ddb8d4" 00:16:47.821 ], 00:16:47.821 "product_name": "Raid Volume", 00:16:47.821 "block_size": 512, 00:16:47.821 "num_blocks": 63488, 00:16:47.821 "uuid": "667dc049-a1be-4ea6-a031-a96729ddb8d4", 00:16:47.821 "assigned_rate_limits": { 00:16:47.821 "rw_ios_per_sec": 0, 00:16:47.821 "rw_mbytes_per_sec": 0, 00:16:47.821 "r_mbytes_per_sec": 0, 00:16:47.821 "w_mbytes_per_sec": 0 00:16:47.821 }, 00:16:47.821 "claimed": false, 00:16:47.821 "zoned": false, 00:16:47.821 "supported_io_types": { 00:16:47.821 "read": true, 00:16:47.821 "write": true, 00:16:47.821 "unmap": false, 00:16:47.821 "flush": false, 00:16:47.821 "reset": true, 00:16:47.821 "nvme_admin": false, 00:16:47.821 "nvme_io": false, 00:16:47.821 "nvme_io_md": false, 00:16:47.821 "write_zeroes": true, 00:16:47.821 "zcopy": false, 00:16:47.821 "get_zone_info": false, 00:16:47.821 "zone_management": false, 00:16:47.821 "zone_append": false, 00:16:47.821 "compare": false, 00:16:47.821 "compare_and_write": false, 00:16:47.821 "abort": false, 00:16:47.821 "seek_hole": false, 00:16:47.821 "seek_data": false, 00:16:47.821 "copy": false, 00:16:47.821 
"nvme_iov_md": false 00:16:47.821 }, 00:16:47.821 "memory_domains": [ 00:16:47.821 { 00:16:47.821 "dma_device_id": "system", 00:16:47.821 "dma_device_type": 1 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.821 "dma_device_type": 2 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "dma_device_id": "system", 00:16:47.821 "dma_device_type": 1 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.821 "dma_device_type": 2 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "dma_device_id": "system", 00:16:47.821 "dma_device_type": 1 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.821 "dma_device_type": 2 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "dma_device_id": "system", 00:16:47.821 "dma_device_type": 1 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.821 "dma_device_type": 2 00:16:47.821 } 00:16:47.821 ], 00:16:47.821 "driver_specific": { 00:16:47.821 "raid": { 00:16:47.821 "uuid": "667dc049-a1be-4ea6-a031-a96729ddb8d4", 00:16:47.821 "strip_size_kb": 0, 00:16:47.821 "state": "online", 00:16:47.821 "raid_level": "raid1", 00:16:47.821 "superblock": true, 00:16:47.821 "num_base_bdevs": 4, 00:16:47.821 "num_base_bdevs_discovered": 4, 00:16:47.821 "num_base_bdevs_operational": 4, 00:16:47.821 "base_bdevs_list": [ 00:16:47.821 { 00:16:47.821 "name": "BaseBdev1", 00:16:47.821 "uuid": "8ef94537-d93a-4139-a99f-a94e7155a192", 00:16:47.821 "is_configured": true, 00:16:47.821 "data_offset": 2048, 00:16:47.821 "data_size": 63488 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "name": "BaseBdev2", 00:16:47.821 "uuid": "1b02ac7e-1977-4eff-9925-d680e0d1639c", 00:16:47.821 "is_configured": true, 00:16:47.821 "data_offset": 2048, 00:16:47.821 "data_size": 63488 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "name": "BaseBdev3", 00:16:47.821 "uuid": "17908ffb-6632-412b-bb32-1080af2b2e09", 00:16:47.821 "is_configured": true, 
00:16:47.821 "data_offset": 2048, 00:16:47.821 "data_size": 63488 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "name": "BaseBdev4", 00:16:47.821 "uuid": "c25b516e-a55c-4824-b373-6ccc9cdd9f46", 00:16:47.821 "is_configured": true, 00:16:47.821 "data_offset": 2048, 00:16:47.821 "data_size": 63488 00:16:47.821 } 00:16:47.821 ] 00:16:47.821 } 00:16:47.821 } 00:16:47.821 }' 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:47.821 BaseBdev2 00:16:47.821 BaseBdev3 00:16:47.821 BaseBdev4' 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.821 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.079 04:38:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.079 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 [2024-11-27 04:38:35.633301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:48.337 04:38:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.337 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.338 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.338 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.338 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.338 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.338 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.338 "name": "Existed_Raid", 00:16:48.338 "uuid": "667dc049-a1be-4ea6-a031-a96729ddb8d4", 00:16:48.338 "strip_size_kb": 0, 00:16:48.338 
"state": "online", 00:16:48.338 "raid_level": "raid1", 00:16:48.338 "superblock": true, 00:16:48.338 "num_base_bdevs": 4, 00:16:48.338 "num_base_bdevs_discovered": 3, 00:16:48.338 "num_base_bdevs_operational": 3, 00:16:48.338 "base_bdevs_list": [ 00:16:48.338 { 00:16:48.338 "name": null, 00:16:48.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.338 "is_configured": false, 00:16:48.338 "data_offset": 0, 00:16:48.338 "data_size": 63488 00:16:48.338 }, 00:16:48.338 { 00:16:48.338 "name": "BaseBdev2", 00:16:48.338 "uuid": "1b02ac7e-1977-4eff-9925-d680e0d1639c", 00:16:48.338 "is_configured": true, 00:16:48.338 "data_offset": 2048, 00:16:48.338 "data_size": 63488 00:16:48.338 }, 00:16:48.338 { 00:16:48.338 "name": "BaseBdev3", 00:16:48.338 "uuid": "17908ffb-6632-412b-bb32-1080af2b2e09", 00:16:48.338 "is_configured": true, 00:16:48.338 "data_offset": 2048, 00:16:48.338 "data_size": 63488 00:16:48.338 }, 00:16:48.338 { 00:16:48.338 "name": "BaseBdev4", 00:16:48.338 "uuid": "c25b516e-a55c-4824-b373-6ccc9cdd9f46", 00:16:48.338 "is_configured": true, 00:16:48.338 "data_offset": 2048, 00:16:48.338 "data_size": 63488 00:16:48.338 } 00:16:48.338 ] 00:16:48.338 }' 00:16:48.338 04:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.338 04:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.905 04:38:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 [2024-11-27 04:38:36.301331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.905 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.905 [2024-11-27 04:38:36.454984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.163 [2024-11-27 04:38:36.600420] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:49.163 [2024-11-27 04:38:36.600562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.163 [2024-11-27 04:38:36.686937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.163 [2024-11-27 04:38:36.687016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.163 [2024-11-27 04:38:36.687037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.163 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.422 BaseBdev2 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:49.422 [ 00:16:49.422 { 00:16:49.422 "name": "BaseBdev2", 00:16:49.422 "aliases": [ 00:16:49.422 "0cd89f47-c726-4ec2-9a7c-fd4045c445ca" 00:16:49.422 ], 00:16:49.422 "product_name": "Malloc disk", 00:16:49.422 "block_size": 512, 00:16:49.422 "num_blocks": 65536, 00:16:49.422 "uuid": "0cd89f47-c726-4ec2-9a7c-fd4045c445ca", 00:16:49.422 "assigned_rate_limits": { 00:16:49.422 "rw_ios_per_sec": 0, 00:16:49.422 "rw_mbytes_per_sec": 0, 00:16:49.422 "r_mbytes_per_sec": 0, 00:16:49.422 "w_mbytes_per_sec": 0 00:16:49.422 }, 00:16:49.422 "claimed": false, 00:16:49.422 "zoned": false, 00:16:49.422 "supported_io_types": { 00:16:49.422 "read": true, 00:16:49.422 "write": true, 00:16:49.422 "unmap": true, 00:16:49.422 "flush": true, 00:16:49.422 "reset": true, 00:16:49.422 "nvme_admin": false, 00:16:49.422 "nvme_io": false, 00:16:49.422 "nvme_io_md": false, 00:16:49.422 "write_zeroes": true, 00:16:49.422 "zcopy": true, 00:16:49.422 "get_zone_info": false, 00:16:49.422 "zone_management": false, 00:16:49.422 "zone_append": false, 00:16:49.422 "compare": false, 00:16:49.422 "compare_and_write": false, 00:16:49.422 "abort": true, 00:16:49.422 "seek_hole": false, 00:16:49.422 "seek_data": false, 00:16:49.422 "copy": true, 00:16:49.422 "nvme_iov_md": false 00:16:49.422 }, 00:16:49.422 "memory_domains": [ 00:16:49.422 { 00:16:49.422 "dma_device_id": "system", 00:16:49.422 "dma_device_type": 1 00:16:49.422 }, 00:16:49.422 { 00:16:49.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.422 "dma_device_type": 2 00:16:49.422 } 00:16:49.422 ], 00:16:49.422 "driver_specific": {} 00:16:49.422 } 00:16:49.422 ] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:49.422 04:38:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.422 BaseBdev3 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.422 04:38:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.422 [ 00:16:49.422 { 00:16:49.422 "name": "BaseBdev3", 00:16:49.422 "aliases": [ 00:16:49.422 "8adbb19a-5d13-4433-8fa2-1a9e63c37046" 00:16:49.422 ], 00:16:49.422 "product_name": "Malloc disk", 00:16:49.422 "block_size": 512, 00:16:49.422 "num_blocks": 65536, 00:16:49.422 "uuid": "8adbb19a-5d13-4433-8fa2-1a9e63c37046", 00:16:49.422 "assigned_rate_limits": { 00:16:49.422 "rw_ios_per_sec": 0, 00:16:49.422 "rw_mbytes_per_sec": 0, 00:16:49.422 "r_mbytes_per_sec": 0, 00:16:49.422 "w_mbytes_per_sec": 0 00:16:49.422 }, 00:16:49.422 "claimed": false, 00:16:49.422 "zoned": false, 00:16:49.422 "supported_io_types": { 00:16:49.422 "read": true, 00:16:49.422 "write": true, 00:16:49.422 "unmap": true, 00:16:49.422 "flush": true, 00:16:49.422 "reset": true, 00:16:49.422 "nvme_admin": false, 00:16:49.422 "nvme_io": false, 00:16:49.422 "nvme_io_md": false, 00:16:49.422 "write_zeroes": true, 00:16:49.422 "zcopy": true, 00:16:49.422 "get_zone_info": false, 00:16:49.422 "zone_management": false, 00:16:49.422 "zone_append": false, 00:16:49.422 "compare": false, 00:16:49.422 "compare_and_write": false, 00:16:49.422 "abort": true, 00:16:49.422 "seek_hole": false, 00:16:49.422 "seek_data": false, 00:16:49.422 "copy": true, 00:16:49.422 "nvme_iov_md": false 00:16:49.422 }, 00:16:49.422 "memory_domains": [ 00:16:49.422 { 00:16:49.422 "dma_device_id": "system", 00:16:49.422 "dma_device_type": 1 00:16:49.422 }, 00:16:49.422 { 00:16:49.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.422 "dma_device_type": 2 00:16:49.422 } 00:16:49.422 ], 00:16:49.422 "driver_specific": {} 00:16:49.422 } 00:16:49.422 ] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.422 BaseBdev4 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.422 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.423 [ 00:16:49.423 { 00:16:49.423 "name": "BaseBdev4", 00:16:49.423 "aliases": [ 00:16:49.423 "694eff64-e418-4cb1-ae54-050758916e8d" 00:16:49.423 ], 00:16:49.423 "product_name": "Malloc disk", 00:16:49.423 "block_size": 512, 00:16:49.423 "num_blocks": 65536, 00:16:49.423 "uuid": "694eff64-e418-4cb1-ae54-050758916e8d", 00:16:49.423 "assigned_rate_limits": { 00:16:49.423 "rw_ios_per_sec": 0, 00:16:49.423 "rw_mbytes_per_sec": 0, 00:16:49.423 "r_mbytes_per_sec": 0, 00:16:49.423 "w_mbytes_per_sec": 0 00:16:49.423 }, 00:16:49.423 "claimed": false, 00:16:49.423 "zoned": false, 00:16:49.423 "supported_io_types": { 00:16:49.423 "read": true, 00:16:49.423 "write": true, 00:16:49.423 "unmap": true, 00:16:49.423 "flush": true, 00:16:49.423 "reset": true, 00:16:49.423 "nvme_admin": false, 00:16:49.423 "nvme_io": false, 00:16:49.423 "nvme_io_md": false, 00:16:49.423 "write_zeroes": true, 00:16:49.423 "zcopy": true, 00:16:49.423 "get_zone_info": false, 00:16:49.423 "zone_management": false, 00:16:49.423 "zone_append": false, 00:16:49.423 "compare": false, 00:16:49.423 "compare_and_write": false, 00:16:49.423 "abort": true, 00:16:49.423 "seek_hole": false, 00:16:49.423 "seek_data": false, 00:16:49.423 "copy": true, 00:16:49.423 "nvme_iov_md": false 00:16:49.423 }, 00:16:49.423 "memory_domains": [ 00:16:49.423 { 00:16:49.423 "dma_device_id": "system", 00:16:49.423 "dma_device_type": 1 00:16:49.423 }, 00:16:49.423 { 00:16:49.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.423 "dma_device_type": 2 00:16:49.423 } 00:16:49.423 ], 00:16:49.423 "driver_specific": {} 00:16:49.423 } 00:16:49.423 ] 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.423 04:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.423 [2024-11-27 04:38:36.999693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.423 [2024-11-27 04:38:36.999754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.423 [2024-11-27 04:38:36.999803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.423 [2024-11-27 04:38:37.002264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:49.423 [2024-11-27 04:38:37.002331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.423 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.682 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.682 "name": "Existed_Raid", 00:16:49.682 "uuid": "36393386-6b4b-42f9-b3c4-7ec8cbe38c70", 00:16:49.682 "strip_size_kb": 0, 00:16:49.682 "state": "configuring", 00:16:49.682 "raid_level": "raid1", 00:16:49.682 "superblock": true, 00:16:49.682 "num_base_bdevs": 4, 00:16:49.682 "num_base_bdevs_discovered": 3, 00:16:49.682 "num_base_bdevs_operational": 4, 00:16:49.682 "base_bdevs_list": [ 00:16:49.682 { 00:16:49.682 "name": "BaseBdev1", 00:16:49.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.682 "is_configured": false, 00:16:49.682 "data_offset": 0, 00:16:49.682 "data_size": 0 00:16:49.682 }, 00:16:49.682 { 00:16:49.682 "name": "BaseBdev2", 00:16:49.682 "uuid": "0cd89f47-c726-4ec2-9a7c-fd4045c445ca", 
00:16:49.682 "is_configured": true, 00:16:49.682 "data_offset": 2048, 00:16:49.682 "data_size": 63488 00:16:49.682 }, 00:16:49.682 { 00:16:49.682 "name": "BaseBdev3", 00:16:49.682 "uuid": "8adbb19a-5d13-4433-8fa2-1a9e63c37046", 00:16:49.682 "is_configured": true, 00:16:49.682 "data_offset": 2048, 00:16:49.682 "data_size": 63488 00:16:49.682 }, 00:16:49.682 { 00:16:49.682 "name": "BaseBdev4", 00:16:49.682 "uuid": "694eff64-e418-4cb1-ae54-050758916e8d", 00:16:49.682 "is_configured": true, 00:16:49.682 "data_offset": 2048, 00:16:49.682 "data_size": 63488 00:16:49.682 } 00:16:49.682 ] 00:16:49.682 }' 00:16:49.682 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.682 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.940 [2024-11-27 04:38:37.507824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.940 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.198 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.198 "name": "Existed_Raid", 00:16:50.198 "uuid": "36393386-6b4b-42f9-b3c4-7ec8cbe38c70", 00:16:50.198 "strip_size_kb": 0, 00:16:50.198 "state": "configuring", 00:16:50.198 "raid_level": "raid1", 00:16:50.198 "superblock": true, 00:16:50.198 "num_base_bdevs": 4, 00:16:50.198 "num_base_bdevs_discovered": 2, 00:16:50.198 "num_base_bdevs_operational": 4, 00:16:50.198 "base_bdevs_list": [ 00:16:50.198 { 00:16:50.198 "name": "BaseBdev1", 00:16:50.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.198 "is_configured": false, 00:16:50.198 "data_offset": 0, 00:16:50.198 "data_size": 0 00:16:50.198 }, 00:16:50.198 { 00:16:50.198 "name": null, 00:16:50.198 "uuid": "0cd89f47-c726-4ec2-9a7c-fd4045c445ca", 00:16:50.198 
"is_configured": false, 00:16:50.198 "data_offset": 0, 00:16:50.198 "data_size": 63488 00:16:50.198 }, 00:16:50.198 { 00:16:50.198 "name": "BaseBdev3", 00:16:50.198 "uuid": "8adbb19a-5d13-4433-8fa2-1a9e63c37046", 00:16:50.198 "is_configured": true, 00:16:50.198 "data_offset": 2048, 00:16:50.198 "data_size": 63488 00:16:50.198 }, 00:16:50.198 { 00:16:50.198 "name": "BaseBdev4", 00:16:50.198 "uuid": "694eff64-e418-4cb1-ae54-050758916e8d", 00:16:50.198 "is_configured": true, 00:16:50.198 "data_offset": 2048, 00:16:50.198 "data_size": 63488 00:16:50.198 } 00:16:50.198 ] 00:16:50.198 }' 00:16:50.198 04:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.198 04:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.457 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:50.457 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.457 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.457 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.457 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.457 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:50.457 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:50.457 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.457 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.715 [2024-11-27 04:38:38.089428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.715 BaseBdev1 
00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.715 [ 00:16:50.715 { 00:16:50.715 "name": "BaseBdev1", 00:16:50.715 "aliases": [ 00:16:50.715 "0519ca30-e776-4cd4-92b9-6fbf909d6fa4" 00:16:50.715 ], 00:16:50.715 "product_name": "Malloc disk", 00:16:50.715 "block_size": 512, 00:16:50.715 "num_blocks": 65536, 00:16:50.715 "uuid": "0519ca30-e776-4cd4-92b9-6fbf909d6fa4", 00:16:50.715 "assigned_rate_limits": { 00:16:50.715 
"rw_ios_per_sec": 0, 00:16:50.715 "rw_mbytes_per_sec": 0, 00:16:50.715 "r_mbytes_per_sec": 0, 00:16:50.715 "w_mbytes_per_sec": 0 00:16:50.715 }, 00:16:50.715 "claimed": true, 00:16:50.715 "claim_type": "exclusive_write", 00:16:50.715 "zoned": false, 00:16:50.715 "supported_io_types": { 00:16:50.715 "read": true, 00:16:50.715 "write": true, 00:16:50.715 "unmap": true, 00:16:50.715 "flush": true, 00:16:50.715 "reset": true, 00:16:50.715 "nvme_admin": false, 00:16:50.715 "nvme_io": false, 00:16:50.715 "nvme_io_md": false, 00:16:50.715 "write_zeroes": true, 00:16:50.715 "zcopy": true, 00:16:50.715 "get_zone_info": false, 00:16:50.715 "zone_management": false, 00:16:50.715 "zone_append": false, 00:16:50.715 "compare": false, 00:16:50.715 "compare_and_write": false, 00:16:50.715 "abort": true, 00:16:50.715 "seek_hole": false, 00:16:50.715 "seek_data": false, 00:16:50.715 "copy": true, 00:16:50.715 "nvme_iov_md": false 00:16:50.715 }, 00:16:50.715 "memory_domains": [ 00:16:50.715 { 00:16:50.715 "dma_device_id": "system", 00:16:50.715 "dma_device_type": 1 00:16:50.715 }, 00:16:50.715 { 00:16:50.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.715 "dma_device_type": 2 00:16:50.715 } 00:16:50.715 ], 00:16:50.715 "driver_specific": {} 00:16:50.715 } 00:16:50.715 ] 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.715 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.715 "name": "Existed_Raid", 00:16:50.715 "uuid": "36393386-6b4b-42f9-b3c4-7ec8cbe38c70", 00:16:50.715 "strip_size_kb": 0, 00:16:50.715 "state": "configuring", 00:16:50.715 "raid_level": "raid1", 00:16:50.716 "superblock": true, 00:16:50.716 "num_base_bdevs": 4, 00:16:50.716 "num_base_bdevs_discovered": 3, 00:16:50.716 "num_base_bdevs_operational": 4, 00:16:50.716 "base_bdevs_list": [ 00:16:50.716 { 00:16:50.716 "name": "BaseBdev1", 00:16:50.716 "uuid": "0519ca30-e776-4cd4-92b9-6fbf909d6fa4", 00:16:50.716 "is_configured": true, 00:16:50.716 "data_offset": 2048, 00:16:50.716 "data_size": 63488 
00:16:50.716 }, 00:16:50.716 { 00:16:50.716 "name": null, 00:16:50.716 "uuid": "0cd89f47-c726-4ec2-9a7c-fd4045c445ca", 00:16:50.716 "is_configured": false, 00:16:50.716 "data_offset": 0, 00:16:50.716 "data_size": 63488 00:16:50.716 }, 00:16:50.716 { 00:16:50.716 "name": "BaseBdev3", 00:16:50.716 "uuid": "8adbb19a-5d13-4433-8fa2-1a9e63c37046", 00:16:50.716 "is_configured": true, 00:16:50.716 "data_offset": 2048, 00:16:50.716 "data_size": 63488 00:16:50.716 }, 00:16:50.716 { 00:16:50.716 "name": "BaseBdev4", 00:16:50.716 "uuid": "694eff64-e418-4cb1-ae54-050758916e8d", 00:16:50.716 "is_configured": true, 00:16:50.716 "data_offset": 2048, 00:16:50.716 "data_size": 63488 00:16:50.716 } 00:16:50.716 ] 00:16:50.716 }' 00:16:50.716 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.716 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.281 
[2024-11-27 04:38:38.669674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.281 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.281 04:38:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.281 "name": "Existed_Raid", 00:16:51.281 "uuid": "36393386-6b4b-42f9-b3c4-7ec8cbe38c70", 00:16:51.281 "strip_size_kb": 0, 00:16:51.281 "state": "configuring", 00:16:51.282 "raid_level": "raid1", 00:16:51.282 "superblock": true, 00:16:51.282 "num_base_bdevs": 4, 00:16:51.282 "num_base_bdevs_discovered": 2, 00:16:51.282 "num_base_bdevs_operational": 4, 00:16:51.282 "base_bdevs_list": [ 00:16:51.282 { 00:16:51.282 "name": "BaseBdev1", 00:16:51.282 "uuid": "0519ca30-e776-4cd4-92b9-6fbf909d6fa4", 00:16:51.282 "is_configured": true, 00:16:51.282 "data_offset": 2048, 00:16:51.282 "data_size": 63488 00:16:51.282 }, 00:16:51.282 { 00:16:51.282 "name": null, 00:16:51.282 "uuid": "0cd89f47-c726-4ec2-9a7c-fd4045c445ca", 00:16:51.282 "is_configured": false, 00:16:51.282 "data_offset": 0, 00:16:51.282 "data_size": 63488 00:16:51.282 }, 00:16:51.282 { 00:16:51.282 "name": null, 00:16:51.282 "uuid": "8adbb19a-5d13-4433-8fa2-1a9e63c37046", 00:16:51.282 "is_configured": false, 00:16:51.282 "data_offset": 0, 00:16:51.282 "data_size": 63488 00:16:51.282 }, 00:16:51.282 { 00:16:51.282 "name": "BaseBdev4", 00:16:51.282 "uuid": "694eff64-e418-4cb1-ae54-050758916e8d", 00:16:51.282 "is_configured": true, 00:16:51.282 "data_offset": 2048, 00:16:51.282 "data_size": 63488 00:16:51.282 } 00:16:51.282 ] 00:16:51.282 }' 00:16:51.282 04:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.282 04:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.847 
04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.847 [2024-11-27 04:38:39.253780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.847 "name": "Existed_Raid", 00:16:51.847 "uuid": "36393386-6b4b-42f9-b3c4-7ec8cbe38c70", 00:16:51.847 "strip_size_kb": 0, 00:16:51.847 "state": "configuring", 00:16:51.847 "raid_level": "raid1", 00:16:51.847 "superblock": true, 00:16:51.847 "num_base_bdevs": 4, 00:16:51.847 "num_base_bdevs_discovered": 3, 00:16:51.847 "num_base_bdevs_operational": 4, 00:16:51.847 "base_bdevs_list": [ 00:16:51.847 { 00:16:51.847 "name": "BaseBdev1", 00:16:51.847 "uuid": "0519ca30-e776-4cd4-92b9-6fbf909d6fa4", 00:16:51.847 "is_configured": true, 00:16:51.847 "data_offset": 2048, 00:16:51.847 "data_size": 63488 00:16:51.847 }, 00:16:51.847 { 00:16:51.847 "name": null, 00:16:51.847 "uuid": "0cd89f47-c726-4ec2-9a7c-fd4045c445ca", 00:16:51.847 "is_configured": false, 00:16:51.847 "data_offset": 0, 00:16:51.847 "data_size": 63488 00:16:51.847 }, 00:16:51.847 { 00:16:51.847 "name": "BaseBdev3", 00:16:51.847 "uuid": "8adbb19a-5d13-4433-8fa2-1a9e63c37046", 00:16:51.847 "is_configured": true, 00:16:51.847 "data_offset": 2048, 00:16:51.847 "data_size": 63488 00:16:51.847 }, 00:16:51.847 { 00:16:51.847 "name": "BaseBdev4", 00:16:51.847 "uuid": 
"694eff64-e418-4cb1-ae54-050758916e8d", 00:16:51.847 "is_configured": true, 00:16:51.847 "data_offset": 2048, 00:16:51.847 "data_size": 63488 00:16:51.847 } 00:16:51.847 ] 00:16:51.847 }' 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.847 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.415 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.415 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.415 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.415 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:52.415 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.415 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:52.415 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.416 [2024-11-27 04:38:39.810034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.416 "name": "Existed_Raid", 00:16:52.416 "uuid": "36393386-6b4b-42f9-b3c4-7ec8cbe38c70", 00:16:52.416 "strip_size_kb": 0, 00:16:52.416 "state": "configuring", 00:16:52.416 "raid_level": "raid1", 00:16:52.416 "superblock": true, 00:16:52.416 "num_base_bdevs": 4, 00:16:52.416 "num_base_bdevs_discovered": 2, 00:16:52.416 "num_base_bdevs_operational": 4, 00:16:52.416 "base_bdevs_list": [ 00:16:52.416 { 00:16:52.416 "name": null, 00:16:52.416 
"uuid": "0519ca30-e776-4cd4-92b9-6fbf909d6fa4", 00:16:52.416 "is_configured": false, 00:16:52.416 "data_offset": 0, 00:16:52.416 "data_size": 63488 00:16:52.416 }, 00:16:52.416 { 00:16:52.416 "name": null, 00:16:52.416 "uuid": "0cd89f47-c726-4ec2-9a7c-fd4045c445ca", 00:16:52.416 "is_configured": false, 00:16:52.416 "data_offset": 0, 00:16:52.416 "data_size": 63488 00:16:52.416 }, 00:16:52.416 { 00:16:52.416 "name": "BaseBdev3", 00:16:52.416 "uuid": "8adbb19a-5d13-4433-8fa2-1a9e63c37046", 00:16:52.416 "is_configured": true, 00:16:52.416 "data_offset": 2048, 00:16:52.416 "data_size": 63488 00:16:52.416 }, 00:16:52.416 { 00:16:52.416 "name": "BaseBdev4", 00:16:52.416 "uuid": "694eff64-e418-4cb1-ae54-050758916e8d", 00:16:52.416 "is_configured": true, 00:16:52.416 "data_offset": 2048, 00:16:52.416 "data_size": 63488 00:16:52.416 } 00:16:52.416 ] 00:16:52.416 }' 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.416 04:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.982 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.982 04:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.982 04:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.982 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.983 [2024-11-27 04:38:40.514076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.983 04:38:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.983 "name": "Existed_Raid", 00:16:52.983 "uuid": "36393386-6b4b-42f9-b3c4-7ec8cbe38c70", 00:16:52.983 "strip_size_kb": 0, 00:16:52.983 "state": "configuring", 00:16:52.983 "raid_level": "raid1", 00:16:52.983 "superblock": true, 00:16:52.983 "num_base_bdevs": 4, 00:16:52.983 "num_base_bdevs_discovered": 3, 00:16:52.983 "num_base_bdevs_operational": 4, 00:16:52.983 "base_bdevs_list": [ 00:16:52.983 { 00:16:52.983 "name": null, 00:16:52.983 "uuid": "0519ca30-e776-4cd4-92b9-6fbf909d6fa4", 00:16:52.983 "is_configured": false, 00:16:52.983 "data_offset": 0, 00:16:52.983 "data_size": 63488 00:16:52.983 }, 00:16:52.983 { 00:16:52.983 "name": "BaseBdev2", 00:16:52.983 "uuid": "0cd89f47-c726-4ec2-9a7c-fd4045c445ca", 00:16:52.983 "is_configured": true, 00:16:52.983 "data_offset": 2048, 00:16:52.983 "data_size": 63488 00:16:52.983 }, 00:16:52.983 { 00:16:52.983 "name": "BaseBdev3", 00:16:52.983 "uuid": "8adbb19a-5d13-4433-8fa2-1a9e63c37046", 00:16:52.983 "is_configured": true, 00:16:52.983 "data_offset": 2048, 00:16:52.983 "data_size": 63488 00:16:52.983 }, 00:16:52.983 { 00:16:52.983 "name": "BaseBdev4", 00:16:52.983 "uuid": "694eff64-e418-4cb1-ae54-050758916e8d", 00:16:52.983 "is_configured": true, 00:16:52.983 "data_offset": 2048, 00:16:52.983 "data_size": 63488 00:16:52.983 } 00:16:52.983 ] 00:16:52.983 }' 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.983 04:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.549 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:53.550 04:38:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0519ca30-e776-4cd4-92b9-6fbf909d6fa4 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.550 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.808 [2024-11-27 04:38:41.205071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:53.808 [2024-11-27 04:38:41.205602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:53.808 [2024-11-27 04:38:41.205634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:53.808 [2024-11-27 04:38:41.205993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:16:53.808 [2024-11-27 04:38:41.206202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:53.808 [2024-11-27 04:38:41.206218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:53.808 [2024-11-27 04:38:41.206391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.808 NewBaseBdev 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.808 04:38:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.808 [ 00:16:53.808 { 00:16:53.808 "name": "NewBaseBdev", 00:16:53.808 "aliases": [ 00:16:53.808 "0519ca30-e776-4cd4-92b9-6fbf909d6fa4" 00:16:53.808 ], 00:16:53.808 "product_name": "Malloc disk", 00:16:53.808 "block_size": 512, 00:16:53.808 "num_blocks": 65536, 00:16:53.808 "uuid": "0519ca30-e776-4cd4-92b9-6fbf909d6fa4", 00:16:53.808 "assigned_rate_limits": { 00:16:53.808 "rw_ios_per_sec": 0, 00:16:53.808 "rw_mbytes_per_sec": 0, 00:16:53.808 "r_mbytes_per_sec": 0, 00:16:53.808 "w_mbytes_per_sec": 0 00:16:53.808 }, 00:16:53.808 "claimed": true, 00:16:53.808 "claim_type": "exclusive_write", 00:16:53.808 "zoned": false, 00:16:53.808 "supported_io_types": { 00:16:53.808 "read": true, 00:16:53.808 "write": true, 00:16:53.808 "unmap": true, 00:16:53.808 "flush": true, 00:16:53.808 "reset": true, 00:16:53.808 "nvme_admin": false, 00:16:53.808 "nvme_io": false, 00:16:53.808 "nvme_io_md": false, 00:16:53.808 "write_zeroes": true, 00:16:53.808 "zcopy": true, 00:16:53.808 "get_zone_info": false, 00:16:53.808 "zone_management": false, 00:16:53.808 "zone_append": false, 00:16:53.808 "compare": false, 00:16:53.808 "compare_and_write": false, 00:16:53.808 "abort": true, 00:16:53.808 "seek_hole": false, 00:16:53.808 "seek_data": false, 00:16:53.808 "copy": true, 00:16:53.808 "nvme_iov_md": false 00:16:53.808 }, 00:16:53.808 "memory_domains": [ 00:16:53.808 { 00:16:53.808 "dma_device_id": "system", 00:16:53.808 "dma_device_type": 1 00:16:53.808 }, 00:16:53.808 { 00:16:53.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.808 "dma_device_type": 2 00:16:53.808 } 00:16:53.808 ], 00:16:53.808 "driver_specific": {} 00:16:53.808 } 00:16:53.808 ] 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.808 04:38:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.808 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.809 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.809 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.809 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.809 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.809 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.809 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.809 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.809 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.809 "name": "Existed_Raid", 00:16:53.809 "uuid": "36393386-6b4b-42f9-b3c4-7ec8cbe38c70", 00:16:53.809 "strip_size_kb": 0, 00:16:53.809 
"state": "online", 00:16:53.809 "raid_level": "raid1", 00:16:53.809 "superblock": true, 00:16:53.809 "num_base_bdevs": 4, 00:16:53.809 "num_base_bdevs_discovered": 4, 00:16:53.809 "num_base_bdevs_operational": 4, 00:16:53.809 "base_bdevs_list": [ 00:16:53.809 { 00:16:53.809 "name": "NewBaseBdev", 00:16:53.809 "uuid": "0519ca30-e776-4cd4-92b9-6fbf909d6fa4", 00:16:53.809 "is_configured": true, 00:16:53.809 "data_offset": 2048, 00:16:53.809 "data_size": 63488 00:16:53.809 }, 00:16:53.809 { 00:16:53.809 "name": "BaseBdev2", 00:16:53.809 "uuid": "0cd89f47-c726-4ec2-9a7c-fd4045c445ca", 00:16:53.809 "is_configured": true, 00:16:53.809 "data_offset": 2048, 00:16:53.809 "data_size": 63488 00:16:53.809 }, 00:16:53.809 { 00:16:53.809 "name": "BaseBdev3", 00:16:53.809 "uuid": "8adbb19a-5d13-4433-8fa2-1a9e63c37046", 00:16:53.809 "is_configured": true, 00:16:53.809 "data_offset": 2048, 00:16:53.809 "data_size": 63488 00:16:53.809 }, 00:16:53.809 { 00:16:53.809 "name": "BaseBdev4", 00:16:53.809 "uuid": "694eff64-e418-4cb1-ae54-050758916e8d", 00:16:53.809 "is_configured": true, 00:16:53.809 "data_offset": 2048, 00:16:53.809 "data_size": 63488 00:16:53.809 } 00:16:53.809 ] 00:16:53.809 }' 00:16:53.809 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.809 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.374 
04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.374 [2024-11-27 04:38:41.729723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.374 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.374 "name": "Existed_Raid", 00:16:54.374 "aliases": [ 00:16:54.374 "36393386-6b4b-42f9-b3c4-7ec8cbe38c70" 00:16:54.374 ], 00:16:54.374 "product_name": "Raid Volume", 00:16:54.374 "block_size": 512, 00:16:54.374 "num_blocks": 63488, 00:16:54.374 "uuid": "36393386-6b4b-42f9-b3c4-7ec8cbe38c70", 00:16:54.374 "assigned_rate_limits": { 00:16:54.374 "rw_ios_per_sec": 0, 00:16:54.374 "rw_mbytes_per_sec": 0, 00:16:54.374 "r_mbytes_per_sec": 0, 00:16:54.374 "w_mbytes_per_sec": 0 00:16:54.374 }, 00:16:54.374 "claimed": false, 00:16:54.374 "zoned": false, 00:16:54.374 "supported_io_types": { 00:16:54.374 "read": true, 00:16:54.374 "write": true, 00:16:54.374 "unmap": false, 00:16:54.374 "flush": false, 00:16:54.374 "reset": true, 00:16:54.374 "nvme_admin": false, 00:16:54.374 "nvme_io": false, 00:16:54.374 "nvme_io_md": false, 00:16:54.374 "write_zeroes": true, 00:16:54.374 "zcopy": false, 00:16:54.374 "get_zone_info": false, 00:16:54.374 "zone_management": false, 00:16:54.374 "zone_append": false, 00:16:54.374 "compare": false, 00:16:54.374 "compare_and_write": false, 00:16:54.374 
"abort": false, 00:16:54.374 "seek_hole": false, 00:16:54.374 "seek_data": false, 00:16:54.374 "copy": false, 00:16:54.374 "nvme_iov_md": false 00:16:54.374 }, 00:16:54.374 "memory_domains": [ 00:16:54.374 { 00:16:54.375 "dma_device_id": "system", 00:16:54.375 "dma_device_type": 1 00:16:54.375 }, 00:16:54.375 { 00:16:54.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.375 "dma_device_type": 2 00:16:54.375 }, 00:16:54.375 { 00:16:54.375 "dma_device_id": "system", 00:16:54.375 "dma_device_type": 1 00:16:54.375 }, 00:16:54.375 { 00:16:54.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.375 "dma_device_type": 2 00:16:54.375 }, 00:16:54.375 { 00:16:54.375 "dma_device_id": "system", 00:16:54.375 "dma_device_type": 1 00:16:54.375 }, 00:16:54.375 { 00:16:54.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.375 "dma_device_type": 2 00:16:54.375 }, 00:16:54.375 { 00:16:54.375 "dma_device_id": "system", 00:16:54.375 "dma_device_type": 1 00:16:54.375 }, 00:16:54.375 { 00:16:54.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.375 "dma_device_type": 2 00:16:54.375 } 00:16:54.375 ], 00:16:54.375 "driver_specific": { 00:16:54.375 "raid": { 00:16:54.375 "uuid": "36393386-6b4b-42f9-b3c4-7ec8cbe38c70", 00:16:54.375 "strip_size_kb": 0, 00:16:54.375 "state": "online", 00:16:54.375 "raid_level": "raid1", 00:16:54.375 "superblock": true, 00:16:54.375 "num_base_bdevs": 4, 00:16:54.375 "num_base_bdevs_discovered": 4, 00:16:54.375 "num_base_bdevs_operational": 4, 00:16:54.375 "base_bdevs_list": [ 00:16:54.375 { 00:16:54.375 "name": "NewBaseBdev", 00:16:54.375 "uuid": "0519ca30-e776-4cd4-92b9-6fbf909d6fa4", 00:16:54.375 "is_configured": true, 00:16:54.375 "data_offset": 2048, 00:16:54.375 "data_size": 63488 00:16:54.375 }, 00:16:54.375 { 00:16:54.375 "name": "BaseBdev2", 00:16:54.375 "uuid": "0cd89f47-c726-4ec2-9a7c-fd4045c445ca", 00:16:54.375 "is_configured": true, 00:16:54.375 "data_offset": 2048, 00:16:54.375 "data_size": 63488 00:16:54.375 }, 00:16:54.375 { 
00:16:54.375 "name": "BaseBdev3", 00:16:54.375 "uuid": "8adbb19a-5d13-4433-8fa2-1a9e63c37046", 00:16:54.375 "is_configured": true, 00:16:54.375 "data_offset": 2048, 00:16:54.375 "data_size": 63488 00:16:54.375 }, 00:16:54.375 { 00:16:54.375 "name": "BaseBdev4", 00:16:54.375 "uuid": "694eff64-e418-4cb1-ae54-050758916e8d", 00:16:54.375 "is_configured": true, 00:16:54.375 "data_offset": 2048, 00:16:54.375 "data_size": 63488 00:16:54.375 } 00:16:54.375 ] 00:16:54.375 } 00:16:54.375 } 00:16:54.375 }' 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:54.375 BaseBdev2 00:16:54.375 BaseBdev3 00:16:54.375 BaseBdev4' 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.375 04:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.633 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.633 [2024-11-27 04:38:42.081399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.633 [2024-11-27 04:38:42.081544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.633 [2024-11-27 04:38:42.081659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.634 [2024-11-27 04:38:42.082061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.634 [2024-11-27 04:38:42.082085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74112 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74112 ']' 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74112 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74112 00:16:54.634 killing process with pid 74112 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74112' 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74112 00:16:54.634 [2024-11-27 04:38:42.119960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.634 04:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74112 00:16:54.892 [2024-11-27 04:38:42.474043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.278 04:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:56.278 00:16:56.278 real 0m12.805s 00:16:56.278 user 0m21.199s 00:16:56.278 sys 0m1.712s 00:16:56.278 04:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:16:56.278 04:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.278 ************************************ 00:16:56.278 END TEST raid_state_function_test_sb 00:16:56.278 ************************************ 00:16:56.278 04:38:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:56.278 04:38:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:56.278 04:38:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.278 04:38:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.278 ************************************ 00:16:56.278 START TEST raid_superblock_test 00:16:56.278 ************************************ 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:56.278 04:38:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74794 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74794 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74794 ']' 00:16:56.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.278 04:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.278 [2024-11-27 04:38:43.690956] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
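Editor's note: the `killprocess`/`kill -0 74112` sequence in the previous test and the `waitforlisten 74794` call above both rely on probing a pid with signal 0, which checks process existence without delivering a signal. A hedged Python equivalent of that liveness probe (the function name is illustrative, not from the harness):

```python
import errno
import os

def pid_alive(pid):
    # Equivalent of the harness's `kill -0 <pid>`: signal 0 only probes existence.
    try:
        os.kill(pid, 0)
    except OSError as e:
        if e.errno == errno.ESRCH:   # no such process
            return False
        if e.errno == errno.EPERM:   # process exists but belongs to another user
            return True
        raise
    return True

assert pid_alive(os.getpid())  # the current process is trivially alive
```

`waitforlisten` additionally polls the RPC socket (`/var/tmp/spdk.sock`) until the freshly started `bdev_svc` app accepts connections, retrying up to `max_retries` times before giving up.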
00:16:56.278 [2024-11-27 04:38:43.691115] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74794 ] 00:16:56.278 [2024-11-27 04:38:43.864270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.536 [2024-11-27 04:38:43.994531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.794 [2024-11-27 04:38:44.197274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.794 [2024-11-27 04:38:44.197353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:57.362 
04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 malloc1 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 [2024-11-27 04:38:44.783670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.362 [2024-11-27 04:38:44.783746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.362 [2024-11-27 04:38:44.783799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:57.362 [2024-11-27 04:38:44.783817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.362 [2024-11-27 04:38:44.786619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.362 [2024-11-27 04:38:44.786808] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.362 pt1 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 malloc2 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 [2024-11-27 04:38:44.839498] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.362 [2024-11-27 04:38:44.839601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.362 [2024-11-27 04:38:44.839662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:57.362 [2024-11-27 04:38:44.839692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.362 [2024-11-27 04:38:44.843373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.362 [2024-11-27 04:38:44.843419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.362 
pt2 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.362 malloc3 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.362 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 [2024-11-27 04:38:44.910852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:57.363 [2024-11-27 04:38:44.910920] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.363 [2024-11-27 04:38:44.910955] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:57.363 [2024-11-27 04:38:44.910970] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.363 [2024-11-27 04:38:44.913732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.363 [2024-11-27 04:38:44.913937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:57.363 pt3 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 malloc4 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 [2024-11-27 04:38:44.966474] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:57.363 [2024-11-27 04:38:44.966545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.363 [2024-11-27 04:38:44.966577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:57.363 [2024-11-27 04:38:44.966591] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.363 [2024-11-27 04:38:44.969300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.363 [2024-11-27 04:38:44.969337] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:57.363 pt4 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.363 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.363 [2024-11-27 04:38:44.978581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.363 [2024-11-27 04:38:44.981853] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.363 [2024-11-27 04:38:44.982007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:57.621 [2024-11-27 04:38:44.982165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:57.621 [2024-11-27 04:38:44.982540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:57.621 [2024-11-27 04:38:44.982575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:57.621 [2024-11-27 04:38:44.983066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:57.621 [2024-11-27 04:38:44.983418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:57.621 [2024-11-27 04:38:44.983457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:57.621 [2024-11-27 04:38:44.983831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.621 
04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.621 04:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.621 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.621 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.621 "name": "raid_bdev1", 00:16:57.621 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:16:57.621 "strip_size_kb": 0, 00:16:57.621 "state": "online", 00:16:57.621 "raid_level": "raid1", 00:16:57.621 "superblock": true, 00:16:57.621 "num_base_bdevs": 4, 00:16:57.621 "num_base_bdevs_discovered": 4, 00:16:57.621 "num_base_bdevs_operational": 4, 00:16:57.621 "base_bdevs_list": [ 00:16:57.621 { 00:16:57.621 "name": "pt1", 00:16:57.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.621 "is_configured": true, 00:16:57.621 "data_offset": 2048, 00:16:57.621 "data_size": 63488 00:16:57.621 }, 00:16:57.621 { 00:16:57.621 "name": "pt2", 00:16:57.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.621 "is_configured": true, 00:16:57.621 "data_offset": 2048, 00:16:57.622 "data_size": 63488 00:16:57.622 }, 00:16:57.622 { 00:16:57.622 "name": "pt3", 00:16:57.622 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.622 "is_configured": true, 00:16:57.622 "data_offset": 2048, 00:16:57.622 "data_size": 63488 
00:16:57.622 }, 00:16:57.622 { 00:16:57.622 "name": "pt4", 00:16:57.622 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.622 "is_configured": true, 00:16:57.622 "data_offset": 2048, 00:16:57.622 "data_size": 63488 00:16:57.622 } 00:16:57.622 ] 00:16:57.622 }' 00:16:57.622 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.622 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.187 [2024-11-27 04:38:45.519230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.187 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.187 "name": "raid_bdev1", 00:16:58.188 "aliases": [ 00:16:58.188 "38028778-e31c-4cbc-b8f9-07cefa348fd3" 00:16:58.188 ], 
00:16:58.188 "product_name": "Raid Volume", 00:16:58.188 "block_size": 512, 00:16:58.188 "num_blocks": 63488, 00:16:58.188 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:16:58.188 "assigned_rate_limits": { 00:16:58.188 "rw_ios_per_sec": 0, 00:16:58.188 "rw_mbytes_per_sec": 0, 00:16:58.188 "r_mbytes_per_sec": 0, 00:16:58.188 "w_mbytes_per_sec": 0 00:16:58.188 }, 00:16:58.188 "claimed": false, 00:16:58.188 "zoned": false, 00:16:58.188 "supported_io_types": { 00:16:58.188 "read": true, 00:16:58.188 "write": true, 00:16:58.188 "unmap": false, 00:16:58.188 "flush": false, 00:16:58.188 "reset": true, 00:16:58.188 "nvme_admin": false, 00:16:58.188 "nvme_io": false, 00:16:58.188 "nvme_io_md": false, 00:16:58.188 "write_zeroes": true, 00:16:58.188 "zcopy": false, 00:16:58.188 "get_zone_info": false, 00:16:58.188 "zone_management": false, 00:16:58.188 "zone_append": false, 00:16:58.188 "compare": false, 00:16:58.188 "compare_and_write": false, 00:16:58.188 "abort": false, 00:16:58.188 "seek_hole": false, 00:16:58.188 "seek_data": false, 00:16:58.188 "copy": false, 00:16:58.188 "nvme_iov_md": false 00:16:58.188 }, 00:16:58.188 "memory_domains": [ 00:16:58.188 { 00:16:58.188 "dma_device_id": "system", 00:16:58.188 "dma_device_type": 1 00:16:58.188 }, 00:16:58.188 { 00:16:58.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.188 "dma_device_type": 2 00:16:58.188 }, 00:16:58.188 { 00:16:58.188 "dma_device_id": "system", 00:16:58.188 "dma_device_type": 1 00:16:58.188 }, 00:16:58.188 { 00:16:58.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.188 "dma_device_type": 2 00:16:58.188 }, 00:16:58.188 { 00:16:58.188 "dma_device_id": "system", 00:16:58.188 "dma_device_type": 1 00:16:58.188 }, 00:16:58.188 { 00:16:58.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.188 "dma_device_type": 2 00:16:58.188 }, 00:16:58.188 { 00:16:58.188 "dma_device_id": "system", 00:16:58.188 "dma_device_type": 1 00:16:58.188 }, 00:16:58.188 { 00:16:58.188 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:58.188 "dma_device_type": 2 00:16:58.188 } 00:16:58.188 ], 00:16:58.188 "driver_specific": { 00:16:58.188 "raid": { 00:16:58.188 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:16:58.188 "strip_size_kb": 0, 00:16:58.188 "state": "online", 00:16:58.188 "raid_level": "raid1", 00:16:58.188 "superblock": true, 00:16:58.188 "num_base_bdevs": 4, 00:16:58.188 "num_base_bdevs_discovered": 4, 00:16:58.188 "num_base_bdevs_operational": 4, 00:16:58.188 "base_bdevs_list": [ 00:16:58.188 { 00:16:58.188 "name": "pt1", 00:16:58.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.188 "is_configured": true, 00:16:58.188 "data_offset": 2048, 00:16:58.188 "data_size": 63488 00:16:58.188 }, 00:16:58.188 { 00:16:58.188 "name": "pt2", 00:16:58.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.188 "is_configured": true, 00:16:58.188 "data_offset": 2048, 00:16:58.188 "data_size": 63488 00:16:58.188 }, 00:16:58.188 { 00:16:58.188 "name": "pt3", 00:16:58.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.188 "is_configured": true, 00:16:58.188 "data_offset": 2048, 00:16:58.188 "data_size": 63488 00:16:58.188 }, 00:16:58.188 { 00:16:58.188 "name": "pt4", 00:16:58.188 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:58.188 "is_configured": true, 00:16:58.188 "data_offset": 2048, 00:16:58.188 "data_size": 63488 00:16:58.188 } 00:16:58.188 ] 00:16:58.188 } 00:16:58.188 } 00:16:58.188 }' 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:58.188 pt2 00:16:58.188 pt3 00:16:58.188 pt4' 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.188 04:38:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.188 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.446 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.446 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.446 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.446 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:58.446 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.446 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.446 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.446 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.446 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.446 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:58.447 [2024-11-27 04:38:45.883267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=38028778-e31c-4cbc-b8f9-07cefa348fd3 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 38028778-e31c-4cbc-b8f9-07cefa348fd3 ']' 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.447 [2024-11-27 04:38:45.942920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.447 [2024-11-27 04:38:45.942956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.447 [2024-11-27 04:38:45.943062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.447 [2024-11-27 04:38:45.943177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.447 [2024-11-27 04:38:45.943215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.447 04:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.447 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.706 [2024-11-27 04:38:46.102977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:58.706 [2024-11-27 04:38:46.105462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:58.706 [2024-11-27 04:38:46.105536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:58.706 [2024-11-27 04:38:46.105594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:58.706 [2024-11-27 04:38:46.105671] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:58.706 [2024-11-27 04:38:46.105747] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:58.706 [2024-11-27 04:38:46.105796] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:58.706 [2024-11-27 04:38:46.105832] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:58.706 [2024-11-27 04:38:46.105854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.706 [2024-11-27 04:38:46.105870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:16:58.706 request: 00:16:58.706 { 00:16:58.706 "name": "raid_bdev1", 00:16:58.706 "raid_level": "raid1", 00:16:58.706 "base_bdevs": [ 00:16:58.706 "malloc1", 00:16:58.706 "malloc2", 00:16:58.706 "malloc3", 00:16:58.706 "malloc4" 00:16:58.706 ], 00:16:58.706 "superblock": false, 00:16:58.706 "method": "bdev_raid_create", 00:16:58.706 "req_id": 1 00:16:58.706 } 00:16:58.706 Got JSON-RPC error response 00:16:58.706 response: 00:16:58.706 { 00:16:58.706 "code": -17, 00:16:58.706 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:58.706 } 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.706 04:38:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.706 [2024-11-27 04:38:46.166977] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.706 [2024-11-27 04:38:46.167062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.706 [2024-11-27 04:38:46.167089] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:58.706 [2024-11-27 04:38:46.167106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.706 [2024-11-27 04:38:46.169977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.706 [2024-11-27 04:38:46.170032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.706 [2024-11-27 04:38:46.170154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:58.706 [2024-11-27 04:38:46.170232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.706 pt1 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.706 04:38:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.706 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.707 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.707 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.707 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.707 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.707 "name": "raid_bdev1", 00:16:58.707 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:16:58.707 "strip_size_kb": 0, 00:16:58.707 "state": "configuring", 00:16:58.707 "raid_level": "raid1", 00:16:58.707 "superblock": true, 00:16:58.707 "num_base_bdevs": 4, 00:16:58.707 "num_base_bdevs_discovered": 1, 00:16:58.707 "num_base_bdevs_operational": 4, 00:16:58.707 "base_bdevs_list": [ 00:16:58.707 { 00:16:58.707 "name": "pt1", 00:16:58.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.707 "is_configured": true, 00:16:58.707 "data_offset": 2048, 00:16:58.707 "data_size": 63488 00:16:58.707 }, 00:16:58.707 { 00:16:58.707 "name": null, 00:16:58.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.707 "is_configured": false, 00:16:58.707 "data_offset": 2048, 00:16:58.707 "data_size": 63488 00:16:58.707 }, 00:16:58.707 { 00:16:58.707 "name": null, 00:16:58.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.707 
"is_configured": false, 00:16:58.707 "data_offset": 2048, 00:16:58.707 "data_size": 63488 00:16:58.707 }, 00:16:58.707 { 00:16:58.707 "name": null, 00:16:58.707 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:58.707 "is_configured": false, 00:16:58.707 "data_offset": 2048, 00:16:58.707 "data_size": 63488 00:16:58.707 } 00:16:58.707 ] 00:16:58.707 }' 00:16:58.707 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.707 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.274 [2024-11-27 04:38:46.691180] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.274 [2024-11-27 04:38:46.691274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.274 [2024-11-27 04:38:46.691306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:59.274 [2024-11-27 04:38:46.691324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.274 [2024-11-27 04:38:46.691896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.274 [2024-11-27 04:38:46.691926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.274 [2024-11-27 04:38:46.692027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.274 [2024-11-27 04:38:46.692066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:16:59.274 pt2 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.274 [2024-11-27 04:38:46.699160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.274 "name": "raid_bdev1", 00:16:59.274 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:16:59.274 "strip_size_kb": 0, 00:16:59.274 "state": "configuring", 00:16:59.274 "raid_level": "raid1", 00:16:59.274 "superblock": true, 00:16:59.274 "num_base_bdevs": 4, 00:16:59.274 "num_base_bdevs_discovered": 1, 00:16:59.274 "num_base_bdevs_operational": 4, 00:16:59.274 "base_bdevs_list": [ 00:16:59.274 { 00:16:59.274 "name": "pt1", 00:16:59.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.274 "is_configured": true, 00:16:59.274 "data_offset": 2048, 00:16:59.274 "data_size": 63488 00:16:59.274 }, 00:16:59.274 { 00:16:59.274 "name": null, 00:16:59.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.274 "is_configured": false, 00:16:59.274 "data_offset": 0, 00:16:59.274 "data_size": 63488 00:16:59.274 }, 00:16:59.274 { 00:16:59.274 "name": null, 00:16:59.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.274 "is_configured": false, 00:16:59.274 "data_offset": 2048, 00:16:59.274 "data_size": 63488 00:16:59.274 }, 00:16:59.274 { 00:16:59.274 "name": null, 00:16:59.274 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.274 "is_configured": false, 00:16:59.274 "data_offset": 2048, 00:16:59.274 "data_size": 63488 00:16:59.274 } 00:16:59.274 ] 00:16:59.274 }' 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.274 04:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.841 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:16:59.841 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.841 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.841 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.841 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.841 [2024-11-27 04:38:47.227319] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.841 [2024-11-27 04:38:47.227397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.841 [2024-11-27 04:38:47.227433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:59.841 [2024-11-27 04:38:47.227448] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.841 [2024-11-27 04:38:47.228043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.841 [2024-11-27 04:38:47.228078] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.841 [2024-11-27 04:38:47.228190] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.841 [2024-11-27 04:38:47.228224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.841 pt2 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:59.842 04:38:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.842 [2024-11-27 04:38:47.235278] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:59.842 [2024-11-27 04:38:47.235334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.842 [2024-11-27 04:38:47.235361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:59.842 [2024-11-27 04:38:47.235374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.842 [2024-11-27 04:38:47.235850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.842 [2024-11-27 04:38:47.235884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:59.842 [2024-11-27 04:38:47.235971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:59.842 [2024-11-27 04:38:47.236000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:59.842 pt3 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.842 [2024-11-27 04:38:47.243254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:59.842 [2024-11-27 
04:38:47.243305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.842 [2024-11-27 04:38:47.243330] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:59.842 [2024-11-27 04:38:47.243343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.842 [2024-11-27 04:38:47.243838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.842 [2024-11-27 04:38:47.243872] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:59.842 [2024-11-27 04:38:47.243957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:59.842 [2024-11-27 04:38:47.243993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:59.842 [2024-11-27 04:38:47.244175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:59.842 [2024-11-27 04:38:47.244191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:59.842 [2024-11-27 04:38:47.244495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:59.842 [2024-11-27 04:38:47.244688] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:59.842 [2024-11-27 04:38:47.244708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:59.842 [2024-11-27 04:38:47.244895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.842 pt4 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.842 "name": "raid_bdev1", 00:16:59.842 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:16:59.842 "strip_size_kb": 0, 00:16:59.842 "state": "online", 00:16:59.842 "raid_level": "raid1", 00:16:59.842 "superblock": true, 00:16:59.842 "num_base_bdevs": 4, 00:16:59.842 
"num_base_bdevs_discovered": 4, 00:16:59.842 "num_base_bdevs_operational": 4, 00:16:59.842 "base_bdevs_list": [ 00:16:59.842 { 00:16:59.842 "name": "pt1", 00:16:59.842 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.842 "is_configured": true, 00:16:59.842 "data_offset": 2048, 00:16:59.842 "data_size": 63488 00:16:59.842 }, 00:16:59.842 { 00:16:59.842 "name": "pt2", 00:16:59.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.842 "is_configured": true, 00:16:59.842 "data_offset": 2048, 00:16:59.842 "data_size": 63488 00:16:59.842 }, 00:16:59.842 { 00:16:59.842 "name": "pt3", 00:16:59.842 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.842 "is_configured": true, 00:16:59.842 "data_offset": 2048, 00:16:59.842 "data_size": 63488 00:16:59.842 }, 00:16:59.842 { 00:16:59.842 "name": "pt4", 00:16:59.842 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.842 "is_configured": true, 00:16:59.842 "data_offset": 2048, 00:16:59.842 "data_size": 63488 00:16:59.842 } 00:16:59.842 ] 00:16:59.842 }' 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.842 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.409 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:00.409 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:00.409 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.410 [2024-11-27 04:38:47.755871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:00.410 "name": "raid_bdev1", 00:17:00.410 "aliases": [ 00:17:00.410 "38028778-e31c-4cbc-b8f9-07cefa348fd3" 00:17:00.410 ], 00:17:00.410 "product_name": "Raid Volume", 00:17:00.410 "block_size": 512, 00:17:00.410 "num_blocks": 63488, 00:17:00.410 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:17:00.410 "assigned_rate_limits": { 00:17:00.410 "rw_ios_per_sec": 0, 00:17:00.410 "rw_mbytes_per_sec": 0, 00:17:00.410 "r_mbytes_per_sec": 0, 00:17:00.410 "w_mbytes_per_sec": 0 00:17:00.410 }, 00:17:00.410 "claimed": false, 00:17:00.410 "zoned": false, 00:17:00.410 "supported_io_types": { 00:17:00.410 "read": true, 00:17:00.410 "write": true, 00:17:00.410 "unmap": false, 00:17:00.410 "flush": false, 00:17:00.410 "reset": true, 00:17:00.410 "nvme_admin": false, 00:17:00.410 "nvme_io": false, 00:17:00.410 "nvme_io_md": false, 00:17:00.410 "write_zeroes": true, 00:17:00.410 "zcopy": false, 00:17:00.410 "get_zone_info": false, 00:17:00.410 "zone_management": false, 00:17:00.410 "zone_append": false, 00:17:00.410 "compare": false, 00:17:00.410 "compare_and_write": false, 00:17:00.410 "abort": false, 00:17:00.410 "seek_hole": false, 00:17:00.410 "seek_data": false, 00:17:00.410 "copy": false, 00:17:00.410 "nvme_iov_md": false 00:17:00.410 }, 00:17:00.410 "memory_domains": [ 00:17:00.410 { 00:17:00.410 "dma_device_id": "system", 00:17:00.410 
"dma_device_type": 1 00:17:00.410 }, 00:17:00.410 { 00:17:00.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.410 "dma_device_type": 2 00:17:00.410 }, 00:17:00.410 { 00:17:00.410 "dma_device_id": "system", 00:17:00.410 "dma_device_type": 1 00:17:00.410 }, 00:17:00.410 { 00:17:00.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.410 "dma_device_type": 2 00:17:00.410 }, 00:17:00.410 { 00:17:00.410 "dma_device_id": "system", 00:17:00.410 "dma_device_type": 1 00:17:00.410 }, 00:17:00.410 { 00:17:00.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.410 "dma_device_type": 2 00:17:00.410 }, 00:17:00.410 { 00:17:00.410 "dma_device_id": "system", 00:17:00.410 "dma_device_type": 1 00:17:00.410 }, 00:17:00.410 { 00:17:00.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.410 "dma_device_type": 2 00:17:00.410 } 00:17:00.410 ], 00:17:00.410 "driver_specific": { 00:17:00.410 "raid": { 00:17:00.410 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:17:00.410 "strip_size_kb": 0, 00:17:00.410 "state": "online", 00:17:00.410 "raid_level": "raid1", 00:17:00.410 "superblock": true, 00:17:00.410 "num_base_bdevs": 4, 00:17:00.410 "num_base_bdevs_discovered": 4, 00:17:00.410 "num_base_bdevs_operational": 4, 00:17:00.410 "base_bdevs_list": [ 00:17:00.410 { 00:17:00.410 "name": "pt1", 00:17:00.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.410 "is_configured": true, 00:17:00.410 "data_offset": 2048, 00:17:00.410 "data_size": 63488 00:17:00.410 }, 00:17:00.410 { 00:17:00.410 "name": "pt2", 00:17:00.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.410 "is_configured": true, 00:17:00.410 "data_offset": 2048, 00:17:00.410 "data_size": 63488 00:17:00.410 }, 00:17:00.410 { 00:17:00.410 "name": "pt3", 00:17:00.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.410 "is_configured": true, 00:17:00.410 "data_offset": 2048, 00:17:00.410 "data_size": 63488 00:17:00.410 }, 00:17:00.410 { 00:17:00.410 "name": "pt4", 00:17:00.410 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:17:00.410 "is_configured": true, 00:17:00.410 "data_offset": 2048, 00:17:00.410 "data_size": 63488 00:17:00.410 } 00:17:00.410 ] 00:17:00.410 } 00:17:00.410 } 00:17:00.410 }' 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:00.410 pt2 00:17:00.410 pt3 00:17:00.410 pt4' 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.410 04:38:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.410 04:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.410 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:00.670 [2024-11-27 04:38:48.183929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 38028778-e31c-4cbc-b8f9-07cefa348fd3 '!=' 38028778-e31c-4cbc-b8f9-07cefa348fd3 ']' 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.670 [2024-11-27 04:38:48.231629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:00.670 04:38:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.670 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.929 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.929 "name": "raid_bdev1", 00:17:00.929 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:17:00.929 "strip_size_kb": 0, 00:17:00.929 "state": "online", 
00:17:00.929 "raid_level": "raid1", 00:17:00.929 "superblock": true, 00:17:00.929 "num_base_bdevs": 4, 00:17:00.929 "num_base_bdevs_discovered": 3, 00:17:00.929 "num_base_bdevs_operational": 3, 00:17:00.929 "base_bdevs_list": [ 00:17:00.929 { 00:17:00.929 "name": null, 00:17:00.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.929 "is_configured": false, 00:17:00.929 "data_offset": 0, 00:17:00.929 "data_size": 63488 00:17:00.929 }, 00:17:00.929 { 00:17:00.929 "name": "pt2", 00:17:00.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.929 "is_configured": true, 00:17:00.929 "data_offset": 2048, 00:17:00.929 "data_size": 63488 00:17:00.929 }, 00:17:00.929 { 00:17:00.929 "name": "pt3", 00:17:00.929 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.929 "is_configured": true, 00:17:00.929 "data_offset": 2048, 00:17:00.929 "data_size": 63488 00:17:00.929 }, 00:17:00.929 { 00:17:00.929 "name": "pt4", 00:17:00.929 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:00.929 "is_configured": true, 00:17:00.929 "data_offset": 2048, 00:17:00.929 "data_size": 63488 00:17:00.929 } 00:17:00.929 ] 00:17:00.929 }' 00:17:00.929 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.929 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.187 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.187 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.187 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.187 [2024-11-27 04:38:48.763706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.187 [2024-11-27 04:38:48.763749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.187 [2024-11-27 04:38:48.763863] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:17:01.187 [2024-11-27 04:38:48.763975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.187 [2024-11-27 04:38:48.764003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:01.187 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.187 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.187 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.187 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.187 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:01.187 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.445 
04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:01.445 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.446 [2024-11-27 04:38:48.851690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.446 [2024-11-27 04:38:48.851756] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.446 [2024-11-27 04:38:48.851800] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:01.446 [2024-11-27 04:38:48.851816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.446 [2024-11-27 04:38:48.854722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.446 [2024-11-27 04:38:48.854779] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.446 [2024-11-27 04:38:48.854888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:01.446 [2024-11-27 04:38:48.854947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.446 pt2 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.446 "name": "raid_bdev1", 00:17:01.446 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:17:01.446 "strip_size_kb": 0, 00:17:01.446 "state": "configuring", 00:17:01.446 "raid_level": "raid1", 00:17:01.446 "superblock": true, 00:17:01.446 "num_base_bdevs": 4, 00:17:01.446 "num_base_bdevs_discovered": 1, 00:17:01.446 "num_base_bdevs_operational": 3, 00:17:01.446 "base_bdevs_list": [ 00:17:01.446 { 00:17:01.446 "name": null, 00:17:01.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.446 "is_configured": false, 00:17:01.446 "data_offset": 2048, 00:17:01.446 "data_size": 63488 00:17:01.446 }, 00:17:01.446 { 00:17:01.446 "name": "pt2", 00:17:01.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.446 "is_configured": true, 00:17:01.446 "data_offset": 2048, 00:17:01.446 "data_size": 63488 00:17:01.446 }, 00:17:01.446 { 00:17:01.446 "name": null, 00:17:01.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.446 "is_configured": false, 00:17:01.446 "data_offset": 2048, 00:17:01.446 "data_size": 63488 00:17:01.446 }, 00:17:01.446 { 00:17:01.446 "name": null, 00:17:01.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:01.446 "is_configured": false, 00:17:01.446 "data_offset": 2048, 00:17:01.446 "data_size": 63488 00:17:01.446 } 00:17:01.446 ] 00:17:01.446 }' 
00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.446 04:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.028 [2024-11-27 04:38:49.383901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:02.028 [2024-11-27 04:38:49.383984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.028 [2024-11-27 04:38:49.384018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:02.028 [2024-11-27 04:38:49.384033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.028 [2024-11-27 04:38:49.384613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.028 [2024-11-27 04:38:49.384651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:02.028 [2024-11-27 04:38:49.384789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:02.028 [2024-11-27 04:38:49.384825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:02.028 pt3 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.028 "name": "raid_bdev1", 00:17:02.028 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:17:02.028 "strip_size_kb": 0, 00:17:02.028 "state": "configuring", 00:17:02.028 "raid_level": "raid1", 00:17:02.028 "superblock": true, 00:17:02.028 "num_base_bdevs": 4, 00:17:02.028 "num_base_bdevs_discovered": 2, 00:17:02.028 "num_base_bdevs_operational": 3, 00:17:02.028 
"base_bdevs_list": [ 00:17:02.028 { 00:17:02.028 "name": null, 00:17:02.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.028 "is_configured": false, 00:17:02.028 "data_offset": 2048, 00:17:02.028 "data_size": 63488 00:17:02.028 }, 00:17:02.028 { 00:17:02.028 "name": "pt2", 00:17:02.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.028 "is_configured": true, 00:17:02.028 "data_offset": 2048, 00:17:02.028 "data_size": 63488 00:17:02.028 }, 00:17:02.028 { 00:17:02.028 "name": "pt3", 00:17:02.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.028 "is_configured": true, 00:17:02.028 "data_offset": 2048, 00:17:02.028 "data_size": 63488 00:17:02.028 }, 00:17:02.028 { 00:17:02.028 "name": null, 00:17:02.028 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.028 "is_configured": false, 00:17:02.028 "data_offset": 2048, 00:17:02.028 "data_size": 63488 00:17:02.028 } 00:17:02.028 ] 00:17:02.028 }' 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.028 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.594 [2024-11-27 04:38:49.928040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:02.594 [2024-11-27 04:38:49.928127] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.594 [2024-11-27 04:38:49.928166] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:02.594 [2024-11-27 04:38:49.928181] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.594 [2024-11-27 04:38:49.928758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.594 [2024-11-27 04:38:49.928798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:02.594 [2024-11-27 04:38:49.928906] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:02.594 [2024-11-27 04:38:49.928938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:02.594 [2024-11-27 04:38:49.929106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:02.594 [2024-11-27 04:38:49.929122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:02.594 [2024-11-27 04:38:49.929422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:02.594 [2024-11-27 04:38:49.929611] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:02.594 [2024-11-27 04:38:49.929632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:02.594 [2024-11-27 04:38:49.929824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.594 pt4 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.594 "name": "raid_bdev1", 00:17:02.594 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:17:02.594 "strip_size_kb": 0, 00:17:02.594 "state": "online", 00:17:02.594 "raid_level": "raid1", 00:17:02.594 "superblock": true, 00:17:02.594 "num_base_bdevs": 4, 00:17:02.594 "num_base_bdevs_discovered": 3, 00:17:02.594 "num_base_bdevs_operational": 3, 00:17:02.594 "base_bdevs_list": [ 00:17:02.594 { 00:17:02.594 "name": null, 00:17:02.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.594 "is_configured": false, 00:17:02.594 
"data_offset": 2048, 00:17:02.594 "data_size": 63488 00:17:02.594 }, 00:17:02.594 { 00:17:02.594 "name": "pt2", 00:17:02.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.594 "is_configured": true, 00:17:02.594 "data_offset": 2048, 00:17:02.594 "data_size": 63488 00:17:02.594 }, 00:17:02.594 { 00:17:02.594 "name": "pt3", 00:17:02.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.594 "is_configured": true, 00:17:02.594 "data_offset": 2048, 00:17:02.594 "data_size": 63488 00:17:02.594 }, 00:17:02.594 { 00:17:02.594 "name": "pt4", 00:17:02.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.594 "is_configured": true, 00:17:02.594 "data_offset": 2048, 00:17:02.594 "data_size": 63488 00:17:02.594 } 00:17:02.594 ] 00:17:02.594 }' 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.594 04:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.949 [2024-11-27 04:38:50.400108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.949 [2024-11-27 04:38:50.400147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.949 [2024-11-27 04:38:50.400246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.949 [2024-11-27 04:38:50.400348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.949 [2024-11-27 04:38:50.400370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:02.949 04:38:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.949 [2024-11-27 04:38:50.472123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.949 [2024-11-27 04:38:50.472204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:02.949 [2024-11-27 04:38:50.472233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:02.949 [2024-11-27 04:38:50.472253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.949 [2024-11-27 04:38:50.475229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.949 [2024-11-27 04:38:50.475280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.949 [2024-11-27 04:38:50.475390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:02.949 [2024-11-27 04:38:50.475459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.949 [2024-11-27 04:38:50.475640] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:02.949 [2024-11-27 04:38:50.475665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.949 [2024-11-27 04:38:50.475687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:02.949 [2024-11-27 04:38:50.475785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.949 [2024-11-27 04:38:50.475945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:02.949 pt1 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.949 "name": "raid_bdev1", 00:17:02.949 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:17:02.949 "strip_size_kb": 0, 00:17:02.949 "state": "configuring", 00:17:02.949 "raid_level": "raid1", 00:17:02.949 "superblock": true, 00:17:02.949 "num_base_bdevs": 4, 00:17:02.949 "num_base_bdevs_discovered": 2, 00:17:02.949 "num_base_bdevs_operational": 3, 00:17:02.949 "base_bdevs_list": [ 00:17:02.949 { 00:17:02.949 "name": null, 00:17:02.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.949 "is_configured": false, 00:17:02.949 "data_offset": 2048, 00:17:02.949 
"data_size": 63488 00:17:02.949 }, 00:17:02.949 { 00:17:02.949 "name": "pt2", 00:17:02.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.949 "is_configured": true, 00:17:02.949 "data_offset": 2048, 00:17:02.949 "data_size": 63488 00:17:02.949 }, 00:17:02.949 { 00:17:02.949 "name": "pt3", 00:17:02.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.949 "is_configured": true, 00:17:02.949 "data_offset": 2048, 00:17:02.949 "data_size": 63488 00:17:02.949 }, 00:17:02.949 { 00:17:02.949 "name": null, 00:17:02.949 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.949 "is_configured": false, 00:17:02.949 "data_offset": 2048, 00:17:02.949 "data_size": 63488 00:17:02.949 } 00:17:02.949 ] 00:17:02.949 }' 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.949 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.514 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:03.515 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.515 04:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:03.515 04:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.515 [2024-11-27 
04:38:51.040288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:03.515 [2024-11-27 04:38:51.040367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.515 [2024-11-27 04:38:51.040400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:03.515 [2024-11-27 04:38:51.040414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.515 [2024-11-27 04:38:51.040986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.515 [2024-11-27 04:38:51.041027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:03.515 [2024-11-27 04:38:51.041135] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:03.515 [2024-11-27 04:38:51.041168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:03.515 [2024-11-27 04:38:51.041335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:03.515 [2024-11-27 04:38:51.041351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:03.515 [2024-11-27 04:38:51.041663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:03.515 [2024-11-27 04:38:51.041869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:03.515 [2024-11-27 04:38:51.041889] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:03.515 [2024-11-27 04:38:51.042078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.515 pt4 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:03.515 04:38:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.515 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.515 "name": "raid_bdev1", 00:17:03.515 "uuid": "38028778-e31c-4cbc-b8f9-07cefa348fd3", 00:17:03.515 "strip_size_kb": 0, 00:17:03.515 "state": "online", 00:17:03.515 "raid_level": "raid1", 00:17:03.515 "superblock": true, 00:17:03.515 "num_base_bdevs": 4, 00:17:03.515 "num_base_bdevs_discovered": 3, 00:17:03.515 "num_base_bdevs_operational": 3, 00:17:03.515 "base_bdevs_list": [ 00:17:03.515 { 
00:17:03.515 "name": null, 00:17:03.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.515 "is_configured": false, 00:17:03.515 "data_offset": 2048, 00:17:03.515 "data_size": 63488 00:17:03.515 }, 00:17:03.515 { 00:17:03.515 "name": "pt2", 00:17:03.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.515 "is_configured": true, 00:17:03.515 "data_offset": 2048, 00:17:03.515 "data_size": 63488 00:17:03.515 }, 00:17:03.515 { 00:17:03.515 "name": "pt3", 00:17:03.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.515 "is_configured": true, 00:17:03.515 "data_offset": 2048, 00:17:03.515 "data_size": 63488 00:17:03.515 }, 00:17:03.515 { 00:17:03.515 "name": "pt4", 00:17:03.515 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.516 "is_configured": true, 00:17:03.516 "data_offset": 2048, 00:17:03.516 "data_size": 63488 00:17:03.516 } 00:17:03.516 ] 00:17:03.516 }' 00:17:03.516 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.516 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.082 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:04.082 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:04.082 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.082 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.082 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.082 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.083 
04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:04.083 [2024-11-27 04:38:51.596845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 38028778-e31c-4cbc-b8f9-07cefa348fd3 '!=' 38028778-e31c-4cbc-b8f9-07cefa348fd3 ']' 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74794 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74794 ']' 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74794 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74794 00:17:04.083 killing process with pid 74794 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74794' 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74794 00:17:04.083 [2024-11-27 04:38:51.680093] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.083 [2024-11-27 04:38:51.680211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.083 [2024-11-27 04:38:51.680315] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.083 [2024-11-27 04:38:51.680337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:04.083 04:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74794 00:17:04.649 [2024-11-27 04:38:52.037118] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.584 ************************************ 00:17:05.584 END TEST raid_superblock_test 00:17:05.584 ************************************ 00:17:05.584 04:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:05.584 00:17:05.584 real 0m9.488s 00:17:05.584 user 0m15.656s 00:17:05.584 sys 0m1.298s 00:17:05.584 04:38:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.584 04:38:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.584 04:38:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:17:05.584 04:38:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:05.584 04:38:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.584 04:38:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.584 ************************************ 00:17:05.584 START TEST raid_read_error_test 00:17:05.584 ************************************ 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:05.584 04:38:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:05.584 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:05.585 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FCSxzHouSR 00:17:05.585 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75292 00:17:05.585 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:05.585 04:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75292 00:17:05.585 04:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75292 ']' 00:17:05.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.585 04:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.585 04:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.585 04:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.585 04:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.585 04:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.843 [2024-11-27 04:38:53.276428] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:17:05.843 [2024-11-27 04:38:53.276602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75292 ] 00:17:05.843 [2024-11-27 04:38:53.462291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.101 [2024-11-27 04:38:53.646829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.361 [2024-11-27 04:38:53.855531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.361 [2024-11-27 04:38:53.855615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 BaseBdev1_malloc 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 true 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 [2024-11-27 04:38:54.342992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:06.930 [2024-11-27 04:38:54.343204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.930 [2024-11-27 04:38:54.343246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:06.930 [2024-11-27 04:38:54.343266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.930 [2024-11-27 04:38:54.346124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.930 [2024-11-27 04:38:54.346178] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.930 BaseBdev1 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 BaseBdev2_malloc 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 true 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 [2024-11-27 04:38:54.399216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:06.930 [2024-11-27 04:38:54.399289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.930 [2024-11-27 04:38:54.399316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:06.930 [2024-11-27 04:38:54.399332] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.930 [2024-11-27 04:38:54.402120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.930 [2024-11-27 04:38:54.402170] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:06.930 BaseBdev2 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 BaseBdev3_malloc 00:17:06.930 04:38:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 true 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 [2024-11-27 04:38:54.471584] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:06.930 [2024-11-27 04:38:54.471669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.930 [2024-11-27 04:38:54.471703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:06.930 [2024-11-27 04:38:54.471720] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.930 [2024-11-27 04:38:54.474728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.930 [2024-11-27 04:38:54.474801] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:06.930 BaseBdev3 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 BaseBdev4_malloc 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.931 true 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.931 [2024-11-27 04:38:54.531822] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:06.931 [2024-11-27 04:38:54.531895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.931 [2024-11-27 04:38:54.531927] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:06.931 [2024-11-27 04:38:54.531945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.931 [2024-11-27 04:38:54.534743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.931 [2024-11-27 04:38:54.534813] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:06.931 BaseBdev4 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.931 [2024-11-27 04:38:54.539892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.931 [2024-11-27 04:38:54.542299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.931 [2024-11-27 04:38:54.542543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.931 [2024-11-27 04:38:54.542656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:06.931 [2024-11-27 04:38:54.542980] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:06.931 [2024-11-27 04:38:54.543008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:06.931 [2024-11-27 04:38:54.543329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:06.931 [2024-11-27 04:38:54.543552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:06.931 [2024-11-27 04:38:54.543569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:06.931 [2024-11-27 04:38:54.543824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:06.931 04:38:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.931 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.188 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.188 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.188 "name": "raid_bdev1", 00:17:07.188 "uuid": "7469f797-0bda-4063-8bc3-51572cd5e031", 00:17:07.188 "strip_size_kb": 0, 00:17:07.188 "state": "online", 00:17:07.188 "raid_level": "raid1", 00:17:07.188 "superblock": true, 00:17:07.188 "num_base_bdevs": 4, 00:17:07.188 "num_base_bdevs_discovered": 4, 00:17:07.188 "num_base_bdevs_operational": 4, 00:17:07.188 "base_bdevs_list": [ 00:17:07.188 { 
00:17:07.188 "name": "BaseBdev1", 00:17:07.188 "uuid": "82f1d754-d34d-5c8d-a27b-4d4fbae67f1e", 00:17:07.188 "is_configured": true, 00:17:07.188 "data_offset": 2048, 00:17:07.188 "data_size": 63488 00:17:07.188 }, 00:17:07.188 { 00:17:07.188 "name": "BaseBdev2", 00:17:07.188 "uuid": "4598081e-0a8c-520f-a05e-20a31c8eb51e", 00:17:07.188 "is_configured": true, 00:17:07.188 "data_offset": 2048, 00:17:07.188 "data_size": 63488 00:17:07.188 }, 00:17:07.188 { 00:17:07.188 "name": "BaseBdev3", 00:17:07.188 "uuid": "e8eb9b2a-a086-58f9-879f-ed09d92bf0e8", 00:17:07.188 "is_configured": true, 00:17:07.188 "data_offset": 2048, 00:17:07.188 "data_size": 63488 00:17:07.188 }, 00:17:07.188 { 00:17:07.188 "name": "BaseBdev4", 00:17:07.188 "uuid": "d4f4b29f-5b6a-5dcc-a871-c51fee3edd2a", 00:17:07.188 "is_configured": true, 00:17:07.188 "data_offset": 2048, 00:17:07.188 "data_size": 63488 00:17:07.188 } 00:17:07.188 ] 00:17:07.188 }' 00:17:07.188 04:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.188 04:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.446 04:38:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:07.446 04:38:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:07.702 [2024-11-27 04:38:55.181466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.651 04:38:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.651 04:38:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.651 "name": "raid_bdev1", 00:17:08.651 "uuid": "7469f797-0bda-4063-8bc3-51572cd5e031", 00:17:08.651 "strip_size_kb": 0, 00:17:08.651 "state": "online", 00:17:08.651 "raid_level": "raid1", 00:17:08.651 "superblock": true, 00:17:08.651 "num_base_bdevs": 4, 00:17:08.651 "num_base_bdevs_discovered": 4, 00:17:08.651 "num_base_bdevs_operational": 4, 00:17:08.651 "base_bdevs_list": [ 00:17:08.651 { 00:17:08.651 "name": "BaseBdev1", 00:17:08.651 "uuid": "82f1d754-d34d-5c8d-a27b-4d4fbae67f1e", 00:17:08.651 "is_configured": true, 00:17:08.651 "data_offset": 2048, 00:17:08.651 "data_size": 63488 00:17:08.651 }, 00:17:08.651 { 00:17:08.651 "name": "BaseBdev2", 00:17:08.651 "uuid": "4598081e-0a8c-520f-a05e-20a31c8eb51e", 00:17:08.651 "is_configured": true, 00:17:08.651 "data_offset": 2048, 00:17:08.651 "data_size": 63488 00:17:08.651 }, 00:17:08.651 { 00:17:08.651 "name": "BaseBdev3", 00:17:08.651 "uuid": "e8eb9b2a-a086-58f9-879f-ed09d92bf0e8", 00:17:08.651 "is_configured": true, 00:17:08.651 "data_offset": 2048, 00:17:08.651 "data_size": 63488 00:17:08.651 }, 00:17:08.651 { 00:17:08.651 "name": "BaseBdev4", 00:17:08.651 "uuid": "d4f4b29f-5b6a-5dcc-a871-c51fee3edd2a", 00:17:08.651 "is_configured": true, 00:17:08.651 "data_offset": 2048, 00:17:08.651 "data_size": 63488 00:17:08.651 } 00:17:08.651 ] 00:17:08.651 }' 00:17:08.651 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.652 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:09.221 [2024-11-27 04:38:56.607233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.221 [2024-11-27 04:38:56.607275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.221 [2024-11-27 04:38:56.610841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.221 [2024-11-27 04:38:56.610920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.221 [2024-11-27 04:38:56.611082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.221 [2024-11-27 04:38:56.611104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:09.221 { 00:17:09.221 "results": [ 00:17:09.221 { 00:17:09.221 "job": "raid_bdev1", 00:17:09.221 "core_mask": "0x1", 00:17:09.221 "workload": "randrw", 00:17:09.221 "percentage": 50, 00:17:09.221 "status": "finished", 00:17:09.221 "queue_depth": 1, 00:17:09.221 "io_size": 131072, 00:17:09.221 "runtime": 1.423286, 00:17:09.221 "iops": 7421.558281329262, 00:17:09.221 "mibps": 927.6947851661578, 00:17:09.221 "io_failed": 0, 00:17:09.221 "io_timeout": 0, 00:17:09.221 "avg_latency_us": 130.1720551151963, 00:17:09.221 "min_latency_us": 43.985454545454544, 00:17:09.221 "max_latency_us": 1995.8690909090908 00:17:09.221 } 00:17:09.221 ], 00:17:09.221 "core_count": 1 00:17:09.221 } 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75292 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75292 ']' 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75292 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75292 00:17:09.221 killing process with pid 75292 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75292' 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75292 00:17:09.221 [2024-11-27 04:38:56.648660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:09.221 04:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75292 00:17:09.478 [2024-11-27 04:38:56.939476] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.853 04:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FCSxzHouSR 00:17:10.853 04:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:10.853 04:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:10.853 04:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:10.853 04:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:10.853 04:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:10.853 04:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:10.853 04:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:10.853 00:17:10.853 real 0m4.907s 00:17:10.853 user 0m6.084s 00:17:10.853 sys 0m0.609s 
00:17:10.853 ************************************ 00:17:10.853 END TEST raid_read_error_test 00:17:10.853 ************************************ 00:17:10.853 04:38:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.853 04:38:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.853 04:38:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:17:10.853 04:38:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:10.853 04:38:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.853 04:38:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.853 ************************************ 00:17:10.853 START TEST raid_write_error_test 00:17:10.853 ************************************ 00:17:10.853 04:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:17:10.853 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:10.853 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:10.853 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:10.853 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:10.853 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:10.853 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:10.853 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:10.853 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1IVyBSxLPy 00:17:10.854 04:38:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75438 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75438 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75438 ']' 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.854 04:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.854 [2024-11-27 04:38:58.244544] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:17:10.854 [2024-11-27 04:38:58.245005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75438 ] 00:17:10.854 [2024-11-27 04:38:58.434605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.112 [2024-11-27 04:38:58.594233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.370 [2024-11-27 04:38:58.801626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.370 [2024-11-27 04:38:58.801706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.629 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.629 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:11.629 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:11.629 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:11.629 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.629 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 BaseBdev1_malloc 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 true 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 [2024-11-27 04:38:59.286440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:11.971 [2024-11-27 04:38:59.286531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.971 [2024-11-27 04:38:59.286574] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:11.971 [2024-11-27 04:38:59.286593] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.971 [2024-11-27 04:38:59.289417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.971 [2024-11-27 04:38:59.289480] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:11.971 BaseBdev1 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 BaseBdev2_malloc 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:11.971 04:38:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 true 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 [2024-11-27 04:38:59.350486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:11.971 [2024-11-27 04:38:59.350560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.971 [2024-11-27 04:38:59.350588] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:11.971 [2024-11-27 04:38:59.350606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.971 [2024-11-27 04:38:59.353431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.971 [2024-11-27 04:38:59.353482] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:11.971 BaseBdev2 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:11.971 BaseBdev3_malloc 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:11.971 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.972 true 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.972 [2024-11-27 04:38:59.440650] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:11.972 [2024-11-27 04:38:59.440729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.972 [2024-11-27 04:38:59.440759] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:11.972 [2024-11-27 04:38:59.440798] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.972 [2024-11-27 04:38:59.443606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.972 [2024-11-27 04:38:59.443661] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:11.972 BaseBdev3 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.972 BaseBdev4_malloc 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.972 true 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.972 [2024-11-27 04:38:59.501703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:11.972 [2024-11-27 04:38:59.501824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.972 [2024-11-27 04:38:59.501877] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:11.972 [2024-11-27 04:38:59.501922] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.972 [2024-11-27 04:38:59.504915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.972 [2024-11-27 04:38:59.504971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:11.972 BaseBdev4 
00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.972 [2024-11-27 04:38:59.509829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.972 [2024-11-27 04:38:59.512384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.972 [2024-11-27 04:38:59.512630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.972 [2024-11-27 04:38:59.512749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:11.972 [2024-11-27 04:38:59.513105] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:11.972 [2024-11-27 04:38:59.513133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:11.972 [2024-11-27 04:38:59.513456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:11.972 [2024-11-27 04:38:59.513692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:11.972 [2024-11-27 04:38:59.513709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:11.972 [2024-11-27 04:38:59.514008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.972 "name": "raid_bdev1", 00:17:11.972 "uuid": "3be13bef-4dd2-413c-825f-04734e344c89", 00:17:11.972 "strip_size_kb": 0, 00:17:11.972 "state": "online", 00:17:11.972 "raid_level": "raid1", 00:17:11.972 "superblock": true, 00:17:11.972 "num_base_bdevs": 4, 00:17:11.972 "num_base_bdevs_discovered": 4, 00:17:11.972 
"num_base_bdevs_operational": 4, 00:17:11.972 "base_bdevs_list": [ 00:17:11.972 { 00:17:11.972 "name": "BaseBdev1", 00:17:11.972 "uuid": "9adf099b-7fa0-58fb-a309-0d6f9572ed1d", 00:17:11.972 "is_configured": true, 00:17:11.972 "data_offset": 2048, 00:17:11.972 "data_size": 63488 00:17:11.972 }, 00:17:11.972 { 00:17:11.972 "name": "BaseBdev2", 00:17:11.972 "uuid": "c0da1c23-f0b5-56ce-ac55-131eb9808892", 00:17:11.972 "is_configured": true, 00:17:11.972 "data_offset": 2048, 00:17:11.972 "data_size": 63488 00:17:11.972 }, 00:17:11.972 { 00:17:11.972 "name": "BaseBdev3", 00:17:11.972 "uuid": "19435646-1882-5f98-8d5e-5e79c05bd2f2", 00:17:11.972 "is_configured": true, 00:17:11.972 "data_offset": 2048, 00:17:11.972 "data_size": 63488 00:17:11.972 }, 00:17:11.972 { 00:17:11.972 "name": "BaseBdev4", 00:17:11.972 "uuid": "1c1df714-3d55-5605-ac92-19d2b7fa0fdf", 00:17:11.972 "is_configured": true, 00:17:11.972 "data_offset": 2048, 00:17:11.972 "data_size": 63488 00:17:11.972 } 00:17:11.972 ] 00:17:11.972 }' 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.972 04:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.545 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:12.545 04:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:12.545 [2024-11-27 04:39:00.123536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.481 [2024-11-27 04:39:01.008203] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:13.481 [2024-11-27 04:39:01.008274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.481 [2024-11-27 04:39:01.008565] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.481 "name": "raid_bdev1", 00:17:13.481 "uuid": "3be13bef-4dd2-413c-825f-04734e344c89", 00:17:13.481 "strip_size_kb": 0, 00:17:13.481 "state": "online", 00:17:13.481 "raid_level": "raid1", 00:17:13.481 "superblock": true, 00:17:13.481 "num_base_bdevs": 4, 00:17:13.481 "num_base_bdevs_discovered": 3, 00:17:13.481 "num_base_bdevs_operational": 3, 00:17:13.481 "base_bdevs_list": [ 00:17:13.481 { 00:17:13.481 "name": null, 00:17:13.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.481 "is_configured": false, 00:17:13.481 "data_offset": 0, 00:17:13.481 "data_size": 63488 00:17:13.481 }, 00:17:13.481 { 00:17:13.481 "name": "BaseBdev2", 00:17:13.481 "uuid": "c0da1c23-f0b5-56ce-ac55-131eb9808892", 00:17:13.481 "is_configured": true, 00:17:13.481 "data_offset": 2048, 00:17:13.481 "data_size": 63488 00:17:13.481 }, 00:17:13.481 { 00:17:13.481 "name": "BaseBdev3", 00:17:13.481 "uuid": "19435646-1882-5f98-8d5e-5e79c05bd2f2", 00:17:13.481 "is_configured": true, 00:17:13.481 "data_offset": 2048, 00:17:13.481 "data_size": 63488 00:17:13.481 }, 00:17:13.481 { 00:17:13.481 "name": "BaseBdev4", 00:17:13.481 "uuid": "1c1df714-3d55-5605-ac92-19d2b7fa0fdf", 00:17:13.481 "is_configured": true, 00:17:13.481 "data_offset": 2048, 00:17:13.481 "data_size": 63488 00:17:13.481 } 00:17:13.481 ] 
00:17:13.481 }' 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.481 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.047 [2024-11-27 04:39:01.540482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.047 [2024-11-27 04:39:01.540520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.047 [2024-11-27 04:39:01.543940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.047 [2024-11-27 04:39:01.544002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.047 [2024-11-27 04:39:01.544143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.047 [2024-11-27 04:39:01.544164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:14.047 { 00:17:14.047 "results": [ 00:17:14.047 { 00:17:14.047 "job": "raid_bdev1", 00:17:14.047 "core_mask": "0x1", 00:17:14.047 "workload": "randrw", 00:17:14.047 "percentage": 50, 00:17:14.047 "status": "finished", 00:17:14.047 "queue_depth": 1, 00:17:14.047 "io_size": 131072, 00:17:14.047 "runtime": 1.414231, 00:17:14.047 "iops": 7985.258419593404, 00:17:14.047 "mibps": 998.1573024491755, 00:17:14.047 "io_failed": 0, 00:17:14.047 "io_timeout": 0, 00:17:14.047 "avg_latency_us": 120.7519869911369, 00:17:14.047 "min_latency_us": 45.38181818181818, 00:17:14.047 "max_latency_us": 1891.6072727272726 00:17:14.047 } 00:17:14.047 ], 00:17:14.047 "core_count": 1 
00:17:14.047 } 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75438 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75438 ']' 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75438 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75438 00:17:14.047 killing process with pid 75438 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75438' 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75438 00:17:14.047 [2024-11-27 04:39:01.576284] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:14.047 04:39:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75438 00:17:14.305 [2024-11-27 04:39:01.867267] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:15.678 04:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1IVyBSxLPy 00:17:15.678 04:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:15.678 04:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:15.678 04:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:17:15.678 04:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:15.678 04:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:15.678 04:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:15.678 04:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:15.678 00:17:15.678 real 0m4.867s 00:17:15.678 user 0m5.997s 00:17:15.678 sys 0m0.586s 00:17:15.678 04:39:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.678 04:39:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.678 ************************************ 00:17:15.678 END TEST raid_write_error_test 00:17:15.678 ************************************ 00:17:15.678 04:39:03 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:17:15.678 04:39:03 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:17:15.678 04:39:03 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:17:15.678 04:39:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:15.678 04:39:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.678 04:39:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.678 ************************************ 00:17:15.678 START TEST raid_rebuild_test 00:17:15.678 ************************************ 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:15.678 
04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75587 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75587 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75587 ']' 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.678 04:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.678 [2024-11-27 04:39:03.142788] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:17:15.678 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:15.678 Zero copy mechanism will not be used. 
00:17:15.678 [2024-11-27 04:39:03.142973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75587 ] 00:17:15.937 [2024-11-27 04:39:03.331525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.937 [2024-11-27 04:39:03.486050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.196 [2024-11-27 04:39:03.704457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.196 [2024-11-27 04:39:03.704500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.762 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.762 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:16.762 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:16.762 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:16.762 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.762 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.762 BaseBdev1_malloc 00:17:16.762 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.762 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:16.762 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.762 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.762 [2024-11-27 04:39:04.255605] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:16.762 
[2024-11-27 04:39:04.255682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.763 [2024-11-27 04:39:04.255714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:16.763 [2024-11-27 04:39:04.255733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.763 [2024-11-27 04:39:04.258519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.763 [2024-11-27 04:39:04.258572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:16.763 BaseBdev1 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.763 BaseBdev2_malloc 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.763 [2024-11-27 04:39:04.311453] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:16.763 [2024-11-27 04:39:04.311532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.763 [2024-11-27 04:39:04.311565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:17:16.763 [2024-11-27 04:39:04.311584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.763 [2024-11-27 04:39:04.314349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.763 [2024-11-27 04:39:04.314399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:16.763 BaseBdev2 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.763 spare_malloc 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.763 spare_delay 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.763 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.763 [2024-11-27 04:39:04.381752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.763 [2024-11-27 04:39:04.381844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:16.763 [2024-11-27 04:39:04.381877] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:16.763 [2024-11-27 04:39:04.381895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.021 [2024-11-27 04:39:04.384710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.021 [2024-11-27 04:39:04.384764] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:17.021 spare 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.021 [2024-11-27 04:39:04.389835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:17.021 [2024-11-27 04:39:04.392230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:17.021 [2024-11-27 04:39:04.392366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:17.021 [2024-11-27 04:39:04.392390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:17.021 [2024-11-27 04:39:04.392711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:17.021 [2024-11-27 04:39:04.392965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:17.021 [2024-11-27 04:39:04.392995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:17.021 [2024-11-27 04:39:04.393193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.021 "name": "raid_bdev1", 00:17:17.021 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:17.021 "strip_size_kb": 0, 00:17:17.021 "state": "online", 00:17:17.021 
"raid_level": "raid1", 00:17:17.021 "superblock": false, 00:17:17.021 "num_base_bdevs": 2, 00:17:17.021 "num_base_bdevs_discovered": 2, 00:17:17.021 "num_base_bdevs_operational": 2, 00:17:17.021 "base_bdevs_list": [ 00:17:17.021 { 00:17:17.021 "name": "BaseBdev1", 00:17:17.021 "uuid": "db536733-e4c1-5736-b08d-876ed951629f", 00:17:17.021 "is_configured": true, 00:17:17.021 "data_offset": 0, 00:17:17.021 "data_size": 65536 00:17:17.021 }, 00:17:17.021 { 00:17:17.021 "name": "BaseBdev2", 00:17:17.021 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:17.021 "is_configured": true, 00:17:17.021 "data_offset": 0, 00:17:17.021 "data_size": 65536 00:17:17.021 } 00:17:17.021 ] 00:17:17.021 }' 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.021 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.588 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.588 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:17.588 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.588 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.588 [2024-11-27 04:39:04.926315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.588 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.588 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:17.588 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.588 04:39:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:17.588 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.588 04:39:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.588 04:39:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.588 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:17.846 [2024-11-27 04:39:05.318129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:17.846 /dev/nbd0 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.846 1+0 records in 00:17:17.846 1+0 records out 00:17:17.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375485 s, 10.9 MB/s 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.846 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:17.847 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.847 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.847 04:39:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:17.847 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.847 04:39:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.847 04:39:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:17.847 04:39:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:17.847 04:39:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:24.405 65536+0 records in 00:17:24.405 65536+0 records out 00:17:24.405 33554432 bytes (34 MB, 32 MiB) copied, 6.58444 s, 5.1 MB/s 00:17:24.405 04:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:24.405 04:39:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.405 04:39:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:24.405 04:39:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.405 04:39:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:24.405 04:39:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.405 04:39:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:24.664 [2024-11-27 04:39:12.262404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.664 04:39:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.664 04:39:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.664 04:39:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.664 04:39:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.664 04:39:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.938 [2024-11-27 04:39:12.294498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.938 04:39:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.938 "name": "raid_bdev1", 00:17:24.938 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:24.938 "strip_size_kb": 0, 00:17:24.938 "state": "online", 00:17:24.938 "raid_level": "raid1", 00:17:24.938 "superblock": false, 00:17:24.938 "num_base_bdevs": 2, 00:17:24.938 "num_base_bdevs_discovered": 1, 00:17:24.938 "num_base_bdevs_operational": 1, 00:17:24.938 "base_bdevs_list": [ 00:17:24.938 { 00:17:24.938 "name": null, 00:17:24.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.938 "is_configured": false, 00:17:24.938 "data_offset": 0, 00:17:24.938 "data_size": 65536 00:17:24.938 }, 00:17:24.938 { 00:17:24.938 "name": "BaseBdev2", 00:17:24.938 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:24.938 "is_configured": true, 00:17:24.938 "data_offset": 0, 00:17:24.938 "data_size": 65536 00:17:24.938 } 00:17:24.938 ] 00:17:24.938 }' 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.938 04:39:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.199 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.199 04:39:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.199 04:39:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.199 [2024-11-27 04:39:12.794666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.199 [2024-11-27 04:39:12.811253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:17:25.199 04:39:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.199 04:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:25.199 [2024-11-27 04:39:12.813839] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.575 "name": "raid_bdev1", 00:17:26.575 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:26.575 "strip_size_kb": 0, 00:17:26.575 "state": "online", 00:17:26.575 "raid_level": "raid1", 00:17:26.575 "superblock": false, 00:17:26.575 "num_base_bdevs": 2, 00:17:26.575 "num_base_bdevs_discovered": 2, 00:17:26.575 "num_base_bdevs_operational": 2, 00:17:26.575 "process": { 00:17:26.575 "type": "rebuild", 00:17:26.575 "target": "spare", 00:17:26.575 "progress": { 00:17:26.575 
"blocks": 20480, 00:17:26.575 "percent": 31 00:17:26.575 } 00:17:26.575 }, 00:17:26.575 "base_bdevs_list": [ 00:17:26.575 { 00:17:26.575 "name": "spare", 00:17:26.575 "uuid": "9e813c87-9a3c-56d3-960b-bc709987b1ba", 00:17:26.575 "is_configured": true, 00:17:26.575 "data_offset": 0, 00:17:26.575 "data_size": 65536 00:17:26.575 }, 00:17:26.575 { 00:17:26.575 "name": "BaseBdev2", 00:17:26.575 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:26.575 "is_configured": true, 00:17:26.575 "data_offset": 0, 00:17:26.575 "data_size": 65536 00:17:26.575 } 00:17:26.575 ] 00:17:26.575 }' 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.575 04:39:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.575 [2024-11-27 04:39:13.974921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.575 [2024-11-27 04:39:14.022717] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:26.575 [2024-11-27 04:39:14.023018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.575 [2024-11-27 04:39:14.023265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.575 [2024-11-27 04:39:14.023326] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:26.575 04:39:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.575 "name": "raid_bdev1", 00:17:26.575 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:26.575 "strip_size_kb": 0, 00:17:26.575 "state": "online", 00:17:26.575 "raid_level": "raid1", 00:17:26.575 
"superblock": false, 00:17:26.575 "num_base_bdevs": 2, 00:17:26.575 "num_base_bdevs_discovered": 1, 00:17:26.575 "num_base_bdevs_operational": 1, 00:17:26.575 "base_bdevs_list": [ 00:17:26.575 { 00:17:26.575 "name": null, 00:17:26.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.575 "is_configured": false, 00:17:26.575 "data_offset": 0, 00:17:26.575 "data_size": 65536 00:17:26.575 }, 00:17:26.575 { 00:17:26.575 "name": "BaseBdev2", 00:17:26.575 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:26.575 "is_configured": true, 00:17:26.575 "data_offset": 0, 00:17:26.575 "data_size": 65536 00:17:26.575 } 00:17:26.575 ] 00:17:26.575 }' 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.575 04:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:27.142 "name": "raid_bdev1", 00:17:27.142 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:27.142 "strip_size_kb": 0, 00:17:27.142 "state": "online", 00:17:27.142 "raid_level": "raid1", 00:17:27.142 "superblock": false, 00:17:27.142 "num_base_bdevs": 2, 00:17:27.142 "num_base_bdevs_discovered": 1, 00:17:27.142 "num_base_bdevs_operational": 1, 00:17:27.142 "base_bdevs_list": [ 00:17:27.142 { 00:17:27.142 "name": null, 00:17:27.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.142 "is_configured": false, 00:17:27.142 "data_offset": 0, 00:17:27.142 "data_size": 65536 00:17:27.142 }, 00:17:27.142 { 00:17:27.142 "name": "BaseBdev2", 00:17:27.142 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:27.142 "is_configured": true, 00:17:27.142 "data_offset": 0, 00:17:27.142 "data_size": 65536 00:17:27.142 } 00:17:27.142 ] 00:17:27.142 }' 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.142 04:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.143 04:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.143 [2024-11-27 04:39:14.728236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.143 [2024-11-27 04:39:14.743994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:17:27.143 04:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.143 
04:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:27.143 [2024-11-27 04:39:14.746516] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.531 "name": "raid_bdev1", 00:17:28.531 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:28.531 "strip_size_kb": 0, 00:17:28.531 "state": "online", 00:17:28.531 "raid_level": "raid1", 00:17:28.531 "superblock": false, 00:17:28.531 "num_base_bdevs": 2, 00:17:28.531 "num_base_bdevs_discovered": 2, 00:17:28.531 "num_base_bdevs_operational": 2, 00:17:28.531 "process": { 00:17:28.531 "type": "rebuild", 00:17:28.531 "target": "spare", 00:17:28.531 "progress": { 00:17:28.531 "blocks": 20480, 00:17:28.531 "percent": 31 00:17:28.531 } 00:17:28.531 }, 00:17:28.531 "base_bdevs_list": [ 
00:17:28.531 { 00:17:28.531 "name": "spare", 00:17:28.531 "uuid": "9e813c87-9a3c-56d3-960b-bc709987b1ba", 00:17:28.531 "is_configured": true, 00:17:28.531 "data_offset": 0, 00:17:28.531 "data_size": 65536 00:17:28.531 }, 00:17:28.531 { 00:17:28.531 "name": "BaseBdev2", 00:17:28.531 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:28.531 "is_configured": true, 00:17:28.531 "data_offset": 0, 00:17:28.531 "data_size": 65536 00:17:28.531 } 00:17:28.531 ] 00:17:28.531 }' 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=396 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.531 
04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.531 "name": "raid_bdev1", 00:17:28.531 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:28.531 "strip_size_kb": 0, 00:17:28.531 "state": "online", 00:17:28.531 "raid_level": "raid1", 00:17:28.531 "superblock": false, 00:17:28.531 "num_base_bdevs": 2, 00:17:28.531 "num_base_bdevs_discovered": 2, 00:17:28.531 "num_base_bdevs_operational": 2, 00:17:28.531 "process": { 00:17:28.531 "type": "rebuild", 00:17:28.531 "target": "spare", 00:17:28.531 "progress": { 00:17:28.531 "blocks": 22528, 00:17:28.531 "percent": 34 00:17:28.531 } 00:17:28.531 }, 00:17:28.531 "base_bdevs_list": [ 00:17:28.531 { 00:17:28.531 "name": "spare", 00:17:28.531 "uuid": "9e813c87-9a3c-56d3-960b-bc709987b1ba", 00:17:28.531 "is_configured": true, 00:17:28.531 "data_offset": 0, 00:17:28.531 "data_size": 65536 00:17:28.531 }, 00:17:28.531 { 00:17:28.531 "name": "BaseBdev2", 00:17:28.531 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:28.531 "is_configured": true, 00:17:28.531 "data_offset": 0, 00:17:28.531 "data_size": 65536 00:17:28.531 } 00:17:28.531 ] 00:17:28.531 }' 00:17:28.531 04:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.531 04:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:17:28.531 04:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.531 04:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.531 04:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.907 "name": "raid_bdev1", 00:17:29.907 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:29.907 "strip_size_kb": 0, 00:17:29.907 "state": "online", 00:17:29.907 "raid_level": "raid1", 00:17:29.907 "superblock": false, 00:17:29.907 "num_base_bdevs": 2, 00:17:29.907 "num_base_bdevs_discovered": 2, 00:17:29.907 "num_base_bdevs_operational": 2, 00:17:29.907 "process": { 
00:17:29.907 "type": "rebuild", 00:17:29.907 "target": "spare", 00:17:29.907 "progress": { 00:17:29.907 "blocks": 47104, 00:17:29.907 "percent": 71 00:17:29.907 } 00:17:29.907 }, 00:17:29.907 "base_bdevs_list": [ 00:17:29.907 { 00:17:29.907 "name": "spare", 00:17:29.907 "uuid": "9e813c87-9a3c-56d3-960b-bc709987b1ba", 00:17:29.907 "is_configured": true, 00:17:29.907 "data_offset": 0, 00:17:29.907 "data_size": 65536 00:17:29.907 }, 00:17:29.907 { 00:17:29.907 "name": "BaseBdev2", 00:17:29.907 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:29.907 "is_configured": true, 00:17:29.907 "data_offset": 0, 00:17:29.907 "data_size": 65536 00:17:29.907 } 00:17:29.907 ] 00:17:29.907 }' 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.907 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.908 04:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.474 [2024-11-27 04:39:17.970271] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:30.474 [2024-11-27 04:39:17.970382] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:30.474 [2024-11-27 04:39:17.970459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.732 "name": "raid_bdev1", 00:17:30.732 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:30.732 "strip_size_kb": 0, 00:17:30.732 "state": "online", 00:17:30.732 "raid_level": "raid1", 00:17:30.732 "superblock": false, 00:17:30.732 "num_base_bdevs": 2, 00:17:30.732 "num_base_bdevs_discovered": 2, 00:17:30.732 "num_base_bdevs_operational": 2, 00:17:30.732 "base_bdevs_list": [ 00:17:30.732 { 00:17:30.732 "name": "spare", 00:17:30.732 "uuid": "9e813c87-9a3c-56d3-960b-bc709987b1ba", 00:17:30.732 "is_configured": true, 00:17:30.732 "data_offset": 0, 00:17:30.732 "data_size": 65536 00:17:30.732 }, 00:17:30.732 { 00:17:30.732 "name": "BaseBdev2", 00:17:30.732 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:30.732 "is_configured": true, 00:17:30.732 "data_offset": 0, 00:17:30.732 "data_size": 65536 00:17:30.732 } 00:17:30.732 ] 00:17:30.732 }' 00:17:30.732 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:30.991 04:39:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.991 "name": "raid_bdev1", 00:17:30.991 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:30.991 "strip_size_kb": 0, 00:17:30.991 "state": "online", 00:17:30.991 "raid_level": "raid1", 00:17:30.991 "superblock": false, 00:17:30.991 "num_base_bdevs": 2, 00:17:30.991 "num_base_bdevs_discovered": 2, 00:17:30.991 "num_base_bdevs_operational": 2, 00:17:30.991 "base_bdevs_list": [ 00:17:30.991 { 00:17:30.991 "name": "spare", 00:17:30.991 "uuid": "9e813c87-9a3c-56d3-960b-bc709987b1ba", 00:17:30.991 "is_configured": true, 
00:17:30.991 "data_offset": 0, 00:17:30.991 "data_size": 65536 00:17:30.991 }, 00:17:30.991 { 00:17:30.991 "name": "BaseBdev2", 00:17:30.991 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:30.991 "is_configured": true, 00:17:30.991 "data_offset": 0, 00:17:30.991 "data_size": 65536 00:17:30.991 } 00:17:30.991 ] 00:17:30.991 }' 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.991 04:39:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.250 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.250 "name": "raid_bdev1", 00:17:31.250 "uuid": "00f3481a-d06d-476c-ac87-5e74d87d08a0", 00:17:31.250 "strip_size_kb": 0, 00:17:31.250 "state": "online", 00:17:31.250 "raid_level": "raid1", 00:17:31.250 "superblock": false, 00:17:31.250 "num_base_bdevs": 2, 00:17:31.250 "num_base_bdevs_discovered": 2, 00:17:31.250 "num_base_bdevs_operational": 2, 00:17:31.250 "base_bdevs_list": [ 00:17:31.250 { 00:17:31.250 "name": "spare", 00:17:31.250 "uuid": "9e813c87-9a3c-56d3-960b-bc709987b1ba", 00:17:31.250 "is_configured": true, 00:17:31.250 "data_offset": 0, 00:17:31.250 "data_size": 65536 00:17:31.250 }, 00:17:31.250 { 00:17:31.250 "name": "BaseBdev2", 00:17:31.250 "uuid": "26859b29-49fe-5f3c-a71c-d2b50577e168", 00:17:31.250 "is_configured": true, 00:17:31.250 "data_offset": 0, 00:17:31.250 "data_size": 65536 00:17:31.250 } 00:17:31.250 ] 00:17:31.250 }' 00:17:31.250 04:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.250 04:39:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.508 [2024-11-27 04:39:19.070539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.508 [2024-11-27 04:39:19.070709] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.508 [2024-11-27 04:39:19.070966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.508 [2024-11-27 04:39:19.071189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.508 [2024-11-27 04:39:19.071321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:31.508 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:32.072 /dev/nbd0 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.072 1+0 records in 00:17:32.072 1+0 records out 00:17:32.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285841 s, 14.3 MB/s 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.072 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:32.330 /dev/nbd1 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.330 1+0 records in 00:17:32.330 1+0 records out 00:17:32.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377681 s, 10.8 MB/s 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.330 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.331 04:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:32.331 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.331 04:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.331 04:39:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:32.588 04:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:32.588 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.588 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.588 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.589 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:32.589 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.589 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:32.847 04:39:20 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.847 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.847 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.847 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.847 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.847 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.847 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:32.847 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.847 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.847 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75587 00:17:33.105 04:39:20 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75587 ']' 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75587 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75587 00:17:33.105 killing process with pid 75587 00:17:33.105 Received shutdown signal, test time was about 60.000000 seconds 00:17:33.105 00:17:33.105 Latency(us) 00:17:33.105 [2024-11-27T04:39:20.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.105 [2024-11-27T04:39:20.728Z] =================================================================================================================== 00:17:33.105 [2024-11-27T04:39:20.728Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75587' 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75587 00:17:33.105 [2024-11-27 04:39:20.671951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.105 04:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75587 00:17:33.363 [2024-11-27 04:39:20.939205] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.738 04:39:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:34.738 ************************************ 00:17:34.738 END TEST raid_rebuild_test 00:17:34.738 
************************************ 00:17:34.738 00:17:34.738 real 0m18.957s 00:17:34.738 user 0m21.524s 00:17:34.738 sys 0m3.619s 00:17:34.738 04:39:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.738 04:39:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.738 04:39:22 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:17:34.738 04:39:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:34.738 04:39:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.738 04:39:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.738 ************************************ 00:17:34.738 START TEST raid_rebuild_test_sb 00:17:34.738 ************************************ 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:34.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76034 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76034 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:34.738 04:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76034 ']' 00:17:34.739 04:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.739 04:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.739 04:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.739 04:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.739 04:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.739 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:34.739 Zero copy mechanism will not be used. 00:17:34.739 [2024-11-27 04:39:22.173867] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:17:34.739 [2024-11-27 04:39:22.174059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76034 ] 00:17:34.739 [2024-11-27 04:39:22.354090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.997 [2024-11-27 04:39:22.484132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.256 [2024-11-27 04:39:22.687997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.256 [2024-11-27 04:39:22.688074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.514 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.514 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:35.514 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:35.514 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:35.514 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.514 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.771 BaseBdev1_malloc 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.771 [2024-11-27 04:39:23.172748] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:17:35.771 [2024-11-27 04:39:23.172838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.771 [2024-11-27 04:39:23.172868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:35.771 [2024-11-27 04:39:23.172886] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.771 [2024-11-27 04:39:23.175679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.771 [2024-11-27 04:39:23.175728] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:35.771 BaseBdev1 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.771 BaseBdev2_malloc 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.771 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.772 [2024-11-27 04:39:23.224749] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:35.772 [2024-11-27 04:39:23.224859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.772 [2024-11-27 04:39:23.224897] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:35.772 [2024-11-27 04:39:23.224921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.772 [2024-11-27 04:39:23.227857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.772 [2024-11-27 04:39:23.227907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:35.772 BaseBdev2 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.772 spare_malloc 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.772 spare_delay 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.772 [2024-11-27 04:39:23.306379] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:17:35.772 [2024-11-27 04:39:23.306463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.772 [2024-11-27 04:39:23.306500] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:35.772 [2024-11-27 04:39:23.306519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.772 [2024-11-27 04:39:23.309389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.772 [2024-11-27 04:39:23.309566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:35.772 spare 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.772 [2024-11-27 04:39:23.318540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.772 [2024-11-27 04:39:23.321051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.772 [2024-11-27 04:39:23.321301] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:35.772 [2024-11-27 04:39:23.321326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:35.772 [2024-11-27 04:39:23.321680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:35.772 [2024-11-27 04:39:23.321953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:35.772 [2024-11-27 04:39:23.321972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:17:35.772 [2024-11-27 04:39:23.322190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:17:35.772 "name": "raid_bdev1", 00:17:35.772 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:35.772 "strip_size_kb": 0, 00:17:35.772 "state": "online", 00:17:35.772 "raid_level": "raid1", 00:17:35.772 "superblock": true, 00:17:35.772 "num_base_bdevs": 2, 00:17:35.772 "num_base_bdevs_discovered": 2, 00:17:35.772 "num_base_bdevs_operational": 2, 00:17:35.772 "base_bdevs_list": [ 00:17:35.772 { 00:17:35.772 "name": "BaseBdev1", 00:17:35.772 "uuid": "543de129-2862-5a40-aa6e-bd7e368f0ff9", 00:17:35.772 "is_configured": true, 00:17:35.772 "data_offset": 2048, 00:17:35.772 "data_size": 63488 00:17:35.772 }, 00:17:35.772 { 00:17:35.772 "name": "BaseBdev2", 00:17:35.772 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:35.772 "is_configured": true, 00:17:35.772 "data_offset": 2048, 00:17:35.772 "data_size": 63488 00:17:35.772 } 00:17:35.772 ] 00:17:35.772 }' 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.772 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.339 [2024-11-27 04:39:23.851051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:36.339 04:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:36.905 [2024-11-27 04:39:24.286812] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:36.905 /dev/nbd0 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:36.905 1+0 records in 00:17:36.905 1+0 records out 00:17:36.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036157 s, 11.3 MB/s 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:36.905 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:36.906 04:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:36.906 04:39:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:36.906 04:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:36.906 04:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:36.906 04:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:36.906 04:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:36.906 04:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:43.460 63488+0 records in 00:17:43.460 63488+0 records out 00:17:43.460 32505856 bytes (33 MB, 31 MiB) copied, 6.44057 s, 5.0 MB/s 00:17:43.460 04:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:43.460 04:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.460 04:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:43.460 04:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:43.460 04:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:43.460 04:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.460 04:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:43.718 [2024-11-27 04:39:31.082379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.718 [2024-11-27 04:39:31.114487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.718 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.718 "name": "raid_bdev1", 00:17:43.718 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:43.718 "strip_size_kb": 0, 00:17:43.718 "state": "online", 00:17:43.718 "raid_level": "raid1", 00:17:43.718 "superblock": true, 00:17:43.718 "num_base_bdevs": 2, 00:17:43.718 "num_base_bdevs_discovered": 1, 00:17:43.719 "num_base_bdevs_operational": 1, 00:17:43.719 "base_bdevs_list": [ 00:17:43.719 { 00:17:43.719 "name": null, 00:17:43.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.719 "is_configured": false, 00:17:43.719 "data_offset": 0, 00:17:43.719 "data_size": 63488 00:17:43.719 }, 00:17:43.719 { 00:17:43.719 "name": "BaseBdev2", 00:17:43.719 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:43.719 "is_configured": true, 00:17:43.719 "data_offset": 2048, 00:17:43.719 "data_size": 63488 00:17:43.719 } 00:17:43.719 ] 00:17:43.719 }' 00:17:43.719 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.719 04:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.285 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:17:44.285 04:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.285 04:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.285 [2024-11-27 04:39:31.630663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.285 [2024-11-27 04:39:31.647580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:17:44.285 04:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.285 04:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:44.285 [2024-11-27 04:39:31.650520] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:45.220 "name": "raid_bdev1", 00:17:45.220 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:45.220 "strip_size_kb": 0, 00:17:45.220 "state": "online", 00:17:45.220 "raid_level": "raid1", 00:17:45.220 "superblock": true, 00:17:45.220 "num_base_bdevs": 2, 00:17:45.220 "num_base_bdevs_discovered": 2, 00:17:45.220 "num_base_bdevs_operational": 2, 00:17:45.220 "process": { 00:17:45.220 "type": "rebuild", 00:17:45.220 "target": "spare", 00:17:45.220 "progress": { 00:17:45.220 "blocks": 20480, 00:17:45.220 "percent": 32 00:17:45.220 } 00:17:45.220 }, 00:17:45.220 "base_bdevs_list": [ 00:17:45.220 { 00:17:45.220 "name": "spare", 00:17:45.220 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:45.220 "is_configured": true, 00:17:45.220 "data_offset": 2048, 00:17:45.220 "data_size": 63488 00:17:45.220 }, 00:17:45.220 { 00:17:45.220 "name": "BaseBdev2", 00:17:45.220 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:45.220 "is_configured": true, 00:17:45.220 "data_offset": 2048, 00:17:45.220 "data_size": 63488 00:17:45.220 } 00:17:45.220 ] 00:17:45.220 }' 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.220 04:39:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.220 [2024-11-27 04:39:32.820005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.479 [2024-11-27 
04:39:32.860089] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:45.479 [2024-11-27 04:39:32.860240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.479 [2024-11-27 04:39:32.860281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.479 [2024-11-27 04:39:32.860300] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.479 04:39:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.479 "name": "raid_bdev1", 00:17:45.479 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:45.479 "strip_size_kb": 0, 00:17:45.479 "state": "online", 00:17:45.479 "raid_level": "raid1", 00:17:45.479 "superblock": true, 00:17:45.479 "num_base_bdevs": 2, 00:17:45.479 "num_base_bdevs_discovered": 1, 00:17:45.479 "num_base_bdevs_operational": 1, 00:17:45.479 "base_bdevs_list": [ 00:17:45.479 { 00:17:45.479 "name": null, 00:17:45.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.479 "is_configured": false, 00:17:45.479 "data_offset": 0, 00:17:45.479 "data_size": 63488 00:17:45.479 }, 00:17:45.479 { 00:17:45.479 "name": "BaseBdev2", 00:17:45.479 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:45.479 "is_configured": true, 00:17:45.479 "data_offset": 2048, 00:17:45.479 "data_size": 63488 00:17:45.479 } 00:17:45.479 ] 00:17:45.479 }' 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.479 04:39:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.046 "name": "raid_bdev1", 00:17:46.046 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:46.046 "strip_size_kb": 0, 00:17:46.046 "state": "online", 00:17:46.046 "raid_level": "raid1", 00:17:46.046 "superblock": true, 00:17:46.046 "num_base_bdevs": 2, 00:17:46.046 "num_base_bdevs_discovered": 1, 00:17:46.046 "num_base_bdevs_operational": 1, 00:17:46.046 "base_bdevs_list": [ 00:17:46.046 { 00:17:46.046 "name": null, 00:17:46.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.046 "is_configured": false, 00:17:46.046 "data_offset": 0, 00:17:46.046 "data_size": 63488 00:17:46.046 }, 00:17:46.046 { 00:17:46.046 "name": "BaseBdev2", 00:17:46.046 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:46.046 "is_configured": true, 00:17:46.046 "data_offset": 2048, 00:17:46.046 "data_size": 63488 00:17:46.046 } 00:17:46.046 ] 00:17:46.046 }' 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.046 04:39:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.046 [2024-11-27 04:39:33.626696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.046 [2024-11-27 04:39:33.642674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.046 04:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:46.046 [2024-11-27 04:39:33.645338] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.422 "name": "raid_bdev1", 00:17:47.422 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:47.422 "strip_size_kb": 0, 00:17:47.422 "state": "online", 00:17:47.422 "raid_level": "raid1", 00:17:47.422 "superblock": true, 00:17:47.422 "num_base_bdevs": 2, 00:17:47.422 "num_base_bdevs_discovered": 2, 00:17:47.422 "num_base_bdevs_operational": 2, 00:17:47.422 "process": { 00:17:47.422 "type": "rebuild", 00:17:47.422 "target": "spare", 00:17:47.422 "progress": { 00:17:47.422 "blocks": 20480, 00:17:47.422 "percent": 32 00:17:47.422 } 00:17:47.422 }, 00:17:47.422 "base_bdevs_list": [ 00:17:47.422 { 00:17:47.422 "name": "spare", 00:17:47.422 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:47.422 "is_configured": true, 00:17:47.422 "data_offset": 2048, 00:17:47.422 "data_size": 63488 00:17:47.422 }, 00:17:47.422 { 00:17:47.422 "name": "BaseBdev2", 00:17:47.422 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:47.422 "is_configured": true, 00:17:47.422 "data_offset": 2048, 00:17:47.422 "data_size": 63488 00:17:47.422 } 00:17:47.422 ] 00:17:47.422 }' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:47.422 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:47.422 04:39:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=415 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.422 "name": "raid_bdev1", 00:17:47.422 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:47.422 "strip_size_kb": 0, 00:17:47.422 "state": "online", 00:17:47.422 "raid_level": "raid1", 00:17:47.422 "superblock": true, 00:17:47.422 "num_base_bdevs": 2, 00:17:47.422 
"num_base_bdevs_discovered": 2, 00:17:47.422 "num_base_bdevs_operational": 2, 00:17:47.422 "process": { 00:17:47.422 "type": "rebuild", 00:17:47.422 "target": "spare", 00:17:47.422 "progress": { 00:17:47.422 "blocks": 22528, 00:17:47.422 "percent": 35 00:17:47.422 } 00:17:47.422 }, 00:17:47.422 "base_bdevs_list": [ 00:17:47.422 { 00:17:47.422 "name": "spare", 00:17:47.422 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:47.422 "is_configured": true, 00:17:47.422 "data_offset": 2048, 00:17:47.422 "data_size": 63488 00:17:47.422 }, 00:17:47.422 { 00:17:47.422 "name": "BaseBdev2", 00:17:47.422 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:47.422 "is_configured": true, 00:17:47.422 "data_offset": 2048, 00:17:47.422 "data_size": 63488 00:17:47.422 } 00:17:47.422 ] 00:17:47.422 }' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.422 04:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.357 04:39:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.616 04:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.616 "name": "raid_bdev1", 00:17:48.616 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:48.616 "strip_size_kb": 0, 00:17:48.616 "state": "online", 00:17:48.616 "raid_level": "raid1", 00:17:48.616 "superblock": true, 00:17:48.616 "num_base_bdevs": 2, 00:17:48.616 "num_base_bdevs_discovered": 2, 00:17:48.616 "num_base_bdevs_operational": 2, 00:17:48.616 "process": { 00:17:48.616 "type": "rebuild", 00:17:48.616 "target": "spare", 00:17:48.616 "progress": { 00:17:48.616 "blocks": 45056, 00:17:48.616 "percent": 70 00:17:48.616 } 00:17:48.616 }, 00:17:48.616 "base_bdevs_list": [ 00:17:48.616 { 00:17:48.616 "name": "spare", 00:17:48.616 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:48.616 "is_configured": true, 00:17:48.616 "data_offset": 2048, 00:17:48.616 "data_size": 63488 00:17:48.616 }, 00:17:48.616 { 00:17:48.616 "name": "BaseBdev2", 00:17:48.616 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:48.616 "is_configured": true, 00:17:48.616 "data_offset": 2048, 00:17:48.616 "data_size": 63488 00:17:48.616 } 00:17:48.616 ] 00:17:48.616 }' 00:17:48.616 04:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.616 04:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.616 04:39:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.616 04:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.616 04:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:49.184 [2024-11-27 04:39:36.768650] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:49.184 [2024-11-27 04:39:36.768755] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:49.184 [2024-11-27 04:39:36.768916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:49.757 "name": "raid_bdev1", 00:17:49.757 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:49.757 "strip_size_kb": 0, 00:17:49.757 "state": "online", 00:17:49.757 "raid_level": "raid1", 00:17:49.757 "superblock": true, 00:17:49.757 "num_base_bdevs": 2, 00:17:49.757 "num_base_bdevs_discovered": 2, 00:17:49.757 "num_base_bdevs_operational": 2, 00:17:49.757 "base_bdevs_list": [ 00:17:49.757 { 00:17:49.757 "name": "spare", 00:17:49.757 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:49.757 "is_configured": true, 00:17:49.757 "data_offset": 2048, 00:17:49.757 "data_size": 63488 00:17:49.757 }, 00:17:49.757 { 00:17:49.757 "name": "BaseBdev2", 00:17:49.757 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:49.757 "is_configured": true, 00:17:49.757 "data_offset": 2048, 00:17:49.757 "data_size": 63488 00:17:49.757 } 00:17:49.757 ] 00:17:49.757 }' 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.757 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.758 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.758 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.758 04:39:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.758 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.758 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.758 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.758 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.758 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.758 "name": "raid_bdev1", 00:17:49.758 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:49.758 "strip_size_kb": 0, 00:17:49.758 "state": "online", 00:17:49.758 "raid_level": "raid1", 00:17:49.758 "superblock": true, 00:17:49.758 "num_base_bdevs": 2, 00:17:49.758 "num_base_bdevs_discovered": 2, 00:17:49.758 "num_base_bdevs_operational": 2, 00:17:49.758 "base_bdevs_list": [ 00:17:49.758 { 00:17:49.758 "name": "spare", 00:17:49.758 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:49.758 "is_configured": true, 00:17:49.758 "data_offset": 2048, 00:17:49.758 "data_size": 63488 00:17:49.758 }, 00:17:49.758 { 00:17:49.758 "name": "BaseBdev2", 00:17:49.758 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:49.758 "is_configured": true, 00:17:49.758 "data_offset": 2048, 00:17:49.758 "data_size": 63488 00:17:49.758 } 00:17:49.758 ] 00:17:49.758 }' 00:17:49.758 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.758 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.758 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.017 "name": "raid_bdev1", 00:17:50.017 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:50.017 "strip_size_kb": 0, 00:17:50.017 "state": "online", 00:17:50.017 "raid_level": "raid1", 00:17:50.017 "superblock": true, 00:17:50.017 "num_base_bdevs": 2, 00:17:50.017 
"num_base_bdevs_discovered": 2, 00:17:50.017 "num_base_bdevs_operational": 2, 00:17:50.017 "base_bdevs_list": [ 00:17:50.017 { 00:17:50.017 "name": "spare", 00:17:50.017 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:50.017 "is_configured": true, 00:17:50.017 "data_offset": 2048, 00:17:50.017 "data_size": 63488 00:17:50.017 }, 00:17:50.017 { 00:17:50.017 "name": "BaseBdev2", 00:17:50.017 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:50.017 "is_configured": true, 00:17:50.017 "data_offset": 2048, 00:17:50.017 "data_size": 63488 00:17:50.017 } 00:17:50.017 ] 00:17:50.017 }' 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.017 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.583 [2024-11-27 04:39:37.940943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.583 [2024-11-27 04:39:37.940986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.583 [2024-11-27 04:39:37.941087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.583 [2024-11-27 04:39:37.941182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.583 [2024-11-27 04:39:37.941200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:50.583 04:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:50.840 /dev/nbd0 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:50.840 1+0 records in 00:17:50.840 1+0 records out 00:17:50.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032538 s, 12.6 MB/s 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:50.840 04:39:38 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:50.840 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:51.098 /dev/nbd1 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:51.098 1+0 records in 00:17:51.098 1+0 records out 00:17:51.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374034 s, 11.0 MB/s 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:51.098 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:51.356 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:51.356 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:51.356 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:51.356 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:51.356 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:51.356 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:51.356 04:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:51.613 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:51.613 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:51.613 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:51.613 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:51.613 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:51.613 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd0 /proc/partitions 00:17:51.613 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:51.613 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:51.613 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:51.613 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.871 04:39:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.871 [2024-11-27 04:39:39.439731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.871 [2024-11-27 04:39:39.439805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.871 [2024-11-27 04:39:39.439842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:51.871 [2024-11-27 04:39:39.439858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.871 [2024-11-27 04:39:39.442718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.871 [2024-11-27 04:39:39.442914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.871 [2024-11-27 04:39:39.443048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:51.871 [2024-11-27 04:39:39.443124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.871 [2024-11-27 04:39:39.443312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.871 spare 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.871 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.128 [2024-11-27 04:39:39.543443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:52.129 [2024-11-27 04:39:39.543506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:52.129 [2024-11-27 
04:39:39.543938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:17:52.129 [2024-11-27 04:39:39.544213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:52.129 [2024-11-27 04:39:39.544243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:52.129 [2024-11-27 04:39:39.544491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.129 04:39:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.129 "name": "raid_bdev1", 00:17:52.129 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:52.129 "strip_size_kb": 0, 00:17:52.129 "state": "online", 00:17:52.129 "raid_level": "raid1", 00:17:52.129 "superblock": true, 00:17:52.129 "num_base_bdevs": 2, 00:17:52.129 "num_base_bdevs_discovered": 2, 00:17:52.129 "num_base_bdevs_operational": 2, 00:17:52.129 "base_bdevs_list": [ 00:17:52.129 { 00:17:52.129 "name": "spare", 00:17:52.129 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:52.129 "is_configured": true, 00:17:52.129 "data_offset": 2048, 00:17:52.129 "data_size": 63488 00:17:52.129 }, 00:17:52.129 { 00:17:52.129 "name": "BaseBdev2", 00:17:52.129 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:52.129 "is_configured": true, 00:17:52.129 "data_offset": 2048, 00:17:52.129 "data_size": 63488 00:17:52.129 } 00:17:52.129 ] 00:17:52.129 }' 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.129 04:39:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.694 "name": "raid_bdev1", 00:17:52.694 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:52.694 "strip_size_kb": 0, 00:17:52.694 "state": "online", 00:17:52.694 "raid_level": "raid1", 00:17:52.694 "superblock": true, 00:17:52.694 "num_base_bdevs": 2, 00:17:52.694 "num_base_bdevs_discovered": 2, 00:17:52.694 "num_base_bdevs_operational": 2, 00:17:52.694 "base_bdevs_list": [ 00:17:52.694 { 00:17:52.694 "name": "spare", 00:17:52.694 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:52.694 "is_configured": true, 00:17:52.694 "data_offset": 2048, 00:17:52.694 "data_size": 63488 00:17:52.694 }, 00:17:52.694 { 00:17:52.694 "name": "BaseBdev2", 00:17:52.694 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:52.694 "is_configured": true, 00:17:52.694 "data_offset": 2048, 00:17:52.694 "data_size": 63488 00:17:52.694 } 00:17:52.694 ] 00:17:52.694 }' 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.694 
04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.694 [2024-11-27 04:39:40.256657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.694 04:39:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.694 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.951 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.951 "name": "raid_bdev1", 00:17:52.951 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:52.951 "strip_size_kb": 0, 00:17:52.951 "state": "online", 00:17:52.951 "raid_level": "raid1", 00:17:52.951 "superblock": true, 00:17:52.951 "num_base_bdevs": 2, 00:17:52.951 "num_base_bdevs_discovered": 1, 00:17:52.951 "num_base_bdevs_operational": 1, 00:17:52.951 "base_bdevs_list": [ 00:17:52.951 { 00:17:52.951 "name": null, 00:17:52.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.951 "is_configured": false, 00:17:52.951 "data_offset": 0, 00:17:52.951 "data_size": 63488 00:17:52.951 }, 00:17:52.951 { 00:17:52.951 "name": "BaseBdev2", 00:17:52.951 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:52.951 "is_configured": true, 00:17:52.951 "data_offset": 2048, 00:17:52.951 "data_size": 63488 00:17:52.951 } 00:17:52.951 ] 00:17:52.951 }' 00:17:52.951 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.951 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:53.209 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:53.209 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.209 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.209 [2024-11-27 04:39:40.764841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.209 [2024-11-27 04:39:40.765091] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:53.209 [2024-11-27 04:39:40.765120] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:53.209 [2024-11-27 04:39:40.765171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.209 [2024-11-27 04:39:40.780806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:17:53.209 04:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.209 04:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:53.209 [2024-11-27 04:39:40.783354] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.585 "name": "raid_bdev1", 00:17:54.585 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:54.585 "strip_size_kb": 0, 00:17:54.585 "state": "online", 00:17:54.585 "raid_level": "raid1", 00:17:54.585 "superblock": true, 00:17:54.585 "num_base_bdevs": 2, 00:17:54.585 "num_base_bdevs_discovered": 2, 00:17:54.585 "num_base_bdevs_operational": 2, 00:17:54.585 "process": { 00:17:54.585 "type": "rebuild", 00:17:54.585 "target": "spare", 00:17:54.585 "progress": { 00:17:54.585 "blocks": 20480, 00:17:54.585 "percent": 32 00:17:54.585 } 00:17:54.585 }, 00:17:54.585 "base_bdevs_list": [ 00:17:54.585 { 00:17:54.585 "name": "spare", 00:17:54.585 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:54.585 "is_configured": true, 00:17:54.585 "data_offset": 2048, 00:17:54.585 "data_size": 63488 00:17:54.585 }, 00:17:54.585 { 00:17:54.585 "name": "BaseBdev2", 00:17:54.585 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:54.585 "is_configured": true, 00:17:54.585 "data_offset": 2048, 00:17:54.585 "data_size": 63488 00:17:54.585 } 00:17:54.585 ] 00:17:54.585 }' 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.585 04:39:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.585 [2024-11-27 04:39:41.944353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.585 [2024-11-27 04:39:41.992011] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:54.585 [2024-11-27 04:39:41.992126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.585 [2024-11-27 04:39:41.992151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.585 [2024-11-27 04:39:41.992166] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.585 
04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.585 "name": "raid_bdev1", 00:17:54.585 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:54.585 "strip_size_kb": 0, 00:17:54.585 "state": "online", 00:17:54.585 "raid_level": "raid1", 00:17:54.585 "superblock": true, 00:17:54.585 "num_base_bdevs": 2, 00:17:54.585 "num_base_bdevs_discovered": 1, 00:17:54.585 "num_base_bdevs_operational": 1, 00:17:54.585 "base_bdevs_list": [ 00:17:54.585 { 00:17:54.585 "name": null, 00:17:54.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.585 "is_configured": false, 00:17:54.585 "data_offset": 0, 00:17:54.585 "data_size": 63488 00:17:54.585 }, 00:17:54.585 { 00:17:54.585 "name": "BaseBdev2", 00:17:54.585 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:54.585 "is_configured": true, 00:17:54.585 "data_offset": 2048, 00:17:54.585 "data_size": 63488 00:17:54.585 } 00:17:54.585 ] 00:17:54.585 }' 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.585 04:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:55.152 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:55.152 04:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.152 04:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.152 [2024-11-27 04:39:42.560020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:55.152 [2024-11-27 04:39:42.560105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.153 [2024-11-27 04:39:42.560137] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:55.153 [2024-11-27 04:39:42.560155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.153 [2024-11-27 04:39:42.560758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.153 [2024-11-27 04:39:42.560814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:55.153 [2024-11-27 04:39:42.560933] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:55.153 [2024-11-27 04:39:42.560958] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:55.153 [2024-11-27 04:39:42.560972] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:55.153 [2024-11-27 04:39:42.561007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:55.153 [2024-11-27 04:39:42.576449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:55.153 spare 00:17:55.153 04:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.153 04:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:55.153 [2024-11-27 04:39:42.579003] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.088 "name": "raid_bdev1", 00:17:56.088 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:56.088 "strip_size_kb": 0, 00:17:56.088 "state": "online", 00:17:56.088 
"raid_level": "raid1", 00:17:56.088 "superblock": true, 00:17:56.088 "num_base_bdevs": 2, 00:17:56.088 "num_base_bdevs_discovered": 2, 00:17:56.088 "num_base_bdevs_operational": 2, 00:17:56.088 "process": { 00:17:56.088 "type": "rebuild", 00:17:56.088 "target": "spare", 00:17:56.088 "progress": { 00:17:56.088 "blocks": 20480, 00:17:56.088 "percent": 32 00:17:56.088 } 00:17:56.088 }, 00:17:56.088 "base_bdevs_list": [ 00:17:56.088 { 00:17:56.088 "name": "spare", 00:17:56.088 "uuid": "b9174890-1d4a-5105-9b56-55b422305cb3", 00:17:56.088 "is_configured": true, 00:17:56.088 "data_offset": 2048, 00:17:56.088 "data_size": 63488 00:17:56.088 }, 00:17:56.088 { 00:17:56.088 "name": "BaseBdev2", 00:17:56.088 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:56.088 "is_configured": true, 00:17:56.088 "data_offset": 2048, 00:17:56.088 "data_size": 63488 00:17:56.088 } 00:17:56.088 ] 00:17:56.088 }' 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.088 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.346 [2024-11-27 04:39:43.728541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.346 [2024-11-27 04:39:43.787799] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:56.346 [2024-11-27 04:39:43.787943] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.346 [2024-11-27 04:39:43.787974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.346 [2024-11-27 04:39:43.787986] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.346 04:39:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.346 "name": "raid_bdev1", 00:17:56.346 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:56.346 "strip_size_kb": 0, 00:17:56.346 "state": "online", 00:17:56.346 "raid_level": "raid1", 00:17:56.346 "superblock": true, 00:17:56.346 "num_base_bdevs": 2, 00:17:56.346 "num_base_bdevs_discovered": 1, 00:17:56.346 "num_base_bdevs_operational": 1, 00:17:56.346 "base_bdevs_list": [ 00:17:56.346 { 00:17:56.346 "name": null, 00:17:56.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.346 "is_configured": false, 00:17:56.346 "data_offset": 0, 00:17:56.346 "data_size": 63488 00:17:56.346 }, 00:17:56.346 { 00:17:56.346 "name": "BaseBdev2", 00:17:56.346 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:56.346 "is_configured": true, 00:17:56.346 "data_offset": 2048, 00:17:56.346 "data_size": 63488 00:17:56.346 } 00:17:56.346 ] 00:17:56.346 }' 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.346 04:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.913 "name": "raid_bdev1", 00:17:56.913 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:56.913 "strip_size_kb": 0, 00:17:56.913 "state": "online", 00:17:56.913 "raid_level": "raid1", 00:17:56.913 "superblock": true, 00:17:56.913 "num_base_bdevs": 2, 00:17:56.913 "num_base_bdevs_discovered": 1, 00:17:56.913 "num_base_bdevs_operational": 1, 00:17:56.913 "base_bdevs_list": [ 00:17:56.913 { 00:17:56.913 "name": null, 00:17:56.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.913 "is_configured": false, 00:17:56.913 "data_offset": 0, 00:17:56.913 "data_size": 63488 00:17:56.913 }, 00:17:56.913 { 00:17:56.913 "name": "BaseBdev2", 00:17:56.913 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:56.913 "is_configured": true, 00:17:56.913 "data_offset": 2048, 00:17:56.913 "data_size": 63488 00:17:56.913 } 00:17:56.913 ] 00:17:56.913 }' 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.913 04:39:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.913 [2024-11-27 04:39:44.484568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:56.913 [2024-11-27 04:39:44.484645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.913 [2024-11-27 04:39:44.484700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:56.913 [2024-11-27 04:39:44.484729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.913 [2024-11-27 04:39:44.485333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.914 [2024-11-27 04:39:44.485366] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.914 [2024-11-27 04:39:44.485474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:56.914 [2024-11-27 04:39:44.485495] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:56.914 [2024-11-27 04:39:44.485511] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:56.914 [2024-11-27 04:39:44.485524] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:56.914 BaseBdev1 00:17:56.914 04:39:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.914 04:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.291 "name": "raid_bdev1", 00:17:58.291 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:58.291 
"strip_size_kb": 0, 00:17:58.291 "state": "online", 00:17:58.291 "raid_level": "raid1", 00:17:58.291 "superblock": true, 00:17:58.291 "num_base_bdevs": 2, 00:17:58.291 "num_base_bdevs_discovered": 1, 00:17:58.291 "num_base_bdevs_operational": 1, 00:17:58.291 "base_bdevs_list": [ 00:17:58.291 { 00:17:58.291 "name": null, 00:17:58.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.291 "is_configured": false, 00:17:58.291 "data_offset": 0, 00:17:58.291 "data_size": 63488 00:17:58.291 }, 00:17:58.291 { 00:17:58.291 "name": "BaseBdev2", 00:17:58.291 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:58.291 "is_configured": true, 00:17:58.291 "data_offset": 2048, 00:17:58.291 "data_size": 63488 00:17:58.291 } 00:17:58.291 ] 00:17:58.291 }' 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.291 04:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.550 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.550 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.550 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.550 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.550 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.550 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.550 04:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.550 04:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.550 04:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.550 04:39:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.550 "name": "raid_bdev1", 00:17:58.550 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:58.550 "strip_size_kb": 0, 00:17:58.550 "state": "online", 00:17:58.550 "raid_level": "raid1", 00:17:58.550 "superblock": true, 00:17:58.550 "num_base_bdevs": 2, 00:17:58.550 "num_base_bdevs_discovered": 1, 00:17:58.550 "num_base_bdevs_operational": 1, 00:17:58.550 "base_bdevs_list": [ 00:17:58.550 { 00:17:58.550 "name": null, 00:17:58.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.550 "is_configured": false, 00:17:58.550 "data_offset": 0, 00:17:58.550 "data_size": 63488 00:17:58.550 }, 00:17:58.550 { 00:17:58.550 "name": "BaseBdev2", 00:17:58.550 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:58.550 "is_configured": true, 00:17:58.550 "data_offset": 2048, 00:17:58.550 "data_size": 63488 00:17:58.550 } 00:17:58.550 ] 00:17:58.550 }' 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.550 [2024-11-27 04:39:46.153104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.550 [2024-11-27 04:39:46.153317] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:58.550 [2024-11-27 04:39:46.153352] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:58.550 request: 00:17:58.550 { 00:17:58.550 "base_bdev": "BaseBdev1", 00:17:58.550 "raid_bdev": "raid_bdev1", 00:17:58.550 "method": "bdev_raid_add_base_bdev", 00:17:58.550 "req_id": 1 00:17:58.550 } 00:17:58.550 Got JSON-RPC error response 00:17:58.550 response: 00:17:58.550 { 00:17:58.550 "code": -22, 00:17:58.550 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:58.550 } 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.550 04:39:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.550 04:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.927 "name": "raid_bdev1", 00:17:59.927 "uuid": 
"24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:17:59.927 "strip_size_kb": 0, 00:17:59.927 "state": "online", 00:17:59.927 "raid_level": "raid1", 00:17:59.927 "superblock": true, 00:17:59.927 "num_base_bdevs": 2, 00:17:59.927 "num_base_bdevs_discovered": 1, 00:17:59.927 "num_base_bdevs_operational": 1, 00:17:59.927 "base_bdevs_list": [ 00:17:59.927 { 00:17:59.927 "name": null, 00:17:59.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.927 "is_configured": false, 00:17:59.927 "data_offset": 0, 00:17:59.927 "data_size": 63488 00:17:59.927 }, 00:17:59.927 { 00:17:59.927 "name": "BaseBdev2", 00:17:59.927 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:17:59.927 "is_configured": true, 00:17:59.927 "data_offset": 2048, 00:17:59.927 "data_size": 63488 00:17:59.927 } 00:17:59.927 ] 00:17:59.927 }' 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.927 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.185 "name": "raid_bdev1", 00:18:00.185 "uuid": "24e01dec-995f-4f9a-bcdf-30ad0f4f3ee2", 00:18:00.185 "strip_size_kb": 0, 00:18:00.185 "state": "online", 00:18:00.185 "raid_level": "raid1", 00:18:00.185 "superblock": true, 00:18:00.185 "num_base_bdevs": 2, 00:18:00.185 "num_base_bdevs_discovered": 1, 00:18:00.185 "num_base_bdevs_operational": 1, 00:18:00.185 "base_bdevs_list": [ 00:18:00.185 { 00:18:00.185 "name": null, 00:18:00.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.185 "is_configured": false, 00:18:00.185 "data_offset": 0, 00:18:00.185 "data_size": 63488 00:18:00.185 }, 00:18:00.185 { 00:18:00.185 "name": "BaseBdev2", 00:18:00.185 "uuid": "e4e536ce-4ca6-5505-8f76-a711bb28c02e", 00:18:00.185 "is_configured": true, 00:18:00.185 "data_offset": 2048, 00:18:00.185 "data_size": 63488 00:18:00.185 } 00:18:00.185 ] 00:18:00.185 }' 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.185 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76034 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76034 ']' 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76034 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76034 00:18:00.445 killing process with pid 76034 00:18:00.445 Received shutdown signal, test time was about 60.000000 seconds 00:18:00.445 00:18:00.445 Latency(us) 00:18:00.445 [2024-11-27T04:39:48.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.445 [2024-11-27T04:39:48.068Z] =================================================================================================================== 00:18:00.445 [2024-11-27T04:39:48.068Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76034' 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76034 00:18:00.445 [2024-11-27 04:39:47.864970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:00.445 04:39:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76034 00:18:00.445 [2024-11-27 04:39:47.865133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.445 [2024-11-27 04:39:47.865200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.445 [2024-11-27 04:39:47.865219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:00.703 [2024-11-27 04:39:48.131787] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:18:01.639 00:18:01.639 real 0m27.138s 00:18:01.639 user 0m33.207s 00:18:01.639 sys 0m4.006s 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.639 ************************************ 00:18:01.639 END TEST raid_rebuild_test_sb 00:18:01.639 ************************************ 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.639 04:39:49 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:18:01.639 04:39:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:01.639 04:39:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.639 04:39:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.639 ************************************ 00:18:01.639 START TEST raid_rebuild_test_io 00:18:01.639 ************************************ 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:01.639 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:01.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76804 00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76804 00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76804 ']' 00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.640 04:39:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.896 [2024-11-27 04:39:49.353452] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:18:01.896 [2024-11-27 04:39:49.353861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:01.896 Zero copy mechanism will not be used. 
00:18:01.896 -allocations --file-prefix=spdk_pid76804 ] 00:18:02.155 [2024-11-27 04:39:49.545527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.155 [2024-11-27 04:39:49.700660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.413 [2024-11-27 04:39:49.924651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.413 [2024-11-27 04:39:49.924736] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.980 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.981 BaseBdev1_malloc 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.981 [2024-11-27 04:39:50.363893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:02.981 [2024-11-27 04:39:50.363970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.981 [2024-11-27 04:39:50.364001] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:18:02.981 [2024-11-27 04:39:50.364020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.981 [2024-11-27 04:39:50.366765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.981 [2024-11-27 04:39:50.366836] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:02.981 BaseBdev1 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.981 BaseBdev2_malloc 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.981 [2024-11-27 04:39:50.416504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:02.981 [2024-11-27 04:39:50.416582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.981 [2024-11-27 04:39:50.416616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:02.981 [2024-11-27 04:39:50.416634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.981 [2024-11-27 04:39:50.419710] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.981 [2024-11-27 04:39:50.419764] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:02.981 BaseBdev2 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.981 spare_malloc 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.981 spare_delay 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.981 [2024-11-27 04:39:50.494554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:02.981 [2024-11-27 04:39:50.494630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.981 [2024-11-27 04:39:50.494659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 
00:18:02.981 [2024-11-27 04:39:50.494677] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.981 [2024-11-27 04:39:50.497434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.981 [2024-11-27 04:39:50.497487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:02.981 spare 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.981 [2024-11-27 04:39:50.502603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.981 [2024-11-27 04:39:50.505026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.981 [2024-11-27 04:39:50.505159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:02.981 [2024-11-27 04:39:50.505184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:02.981 [2024-11-27 04:39:50.505497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:02.981 [2024-11-27 04:39:50.505702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:02.981 [2024-11-27 04:39:50.505721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:02.981 [2024-11-27 04:39:50.505964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.981 04:39:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.981 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.982 "name": "raid_bdev1", 00:18:02.982 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:02.982 "strip_size_kb": 0, 00:18:02.982 "state": "online", 00:18:02.982 "raid_level": "raid1", 00:18:02.982 "superblock": false, 00:18:02.982 "num_base_bdevs": 2, 
00:18:02.982 "num_base_bdevs_discovered": 2, 00:18:02.982 "num_base_bdevs_operational": 2, 00:18:02.982 "base_bdevs_list": [ 00:18:02.982 { 00:18:02.982 "name": "BaseBdev1", 00:18:02.982 "uuid": "29c3f359-4a96-5626-827e-8a6bb1220ba1", 00:18:02.982 "is_configured": true, 00:18:02.982 "data_offset": 0, 00:18:02.982 "data_size": 65536 00:18:02.982 }, 00:18:02.982 { 00:18:02.982 "name": "BaseBdev2", 00:18:02.982 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:02.982 "is_configured": true, 00:18:02.982 "data_offset": 0, 00:18:02.982 "data_size": 65536 00:18:02.982 } 00:18:02.982 ] 00:18:02.982 }' 00:18:02.982 04:39:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.982 04:39:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.548 [2024-11-27 04:39:51.051131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:18:03.548 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.549 [2024-11-27 04:39:51.162763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.549 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.808 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.808 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.808 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.808 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.808 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.808 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.808 "name": "raid_bdev1", 00:18:03.808 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:03.808 "strip_size_kb": 0, 00:18:03.808 "state": "online", 00:18:03.808 "raid_level": "raid1", 00:18:03.808 "superblock": false, 00:18:03.808 "num_base_bdevs": 2, 00:18:03.808 "num_base_bdevs_discovered": 1, 00:18:03.808 "num_base_bdevs_operational": 1, 00:18:03.808 "base_bdevs_list": [ 00:18:03.808 { 00:18:03.808 "name": null, 00:18:03.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.808 "is_configured": false, 00:18:03.808 "data_offset": 0, 00:18:03.808 "data_size": 65536 00:18:03.808 }, 00:18:03.808 { 00:18:03.808 "name": "BaseBdev2", 00:18:03.808 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:03.808 "is_configured": true, 00:18:03.808 "data_offset": 0, 00:18:03.808 "data_size": 65536 00:18:03.808 } 00:18:03.808 ] 00:18:03.808 }' 00:18:03.808 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.808 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.808 [2024-11-27 04:39:51.298812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:03.808 
I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:03.808 Zero copy mechanism will not be used. 00:18:03.808 Running I/O for 60 seconds... 00:18:04.067 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.067 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.067 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.067 [2024-11-27 04:39:51.677278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.325 04:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.326 04:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:04.326 [2024-11-27 04:39:51.755574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:04.326 [2024-11-27 04:39:51.758315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.326 [2024-11-27 04:39:51.875292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:04.326 [2024-11-27 04:39:51.876015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:04.585 [2024-11-27 04:39:52.086876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:04.585 [2024-11-27 04:39:52.087488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:04.843 117.00 IOPS, 351.00 MiB/s [2024-11-27T04:39:52.466Z] [2024-11-27 04:39:52.353182] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:05.102 [2024-11-27 04:39:52.568315] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:05.102 [2024-11-27 04:39:52.568693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:05.360 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.361 "name": "raid_bdev1", 00:18:05.361 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:05.361 "strip_size_kb": 0, 00:18:05.361 "state": "online", 00:18:05.361 "raid_level": "raid1", 00:18:05.361 "superblock": false, 00:18:05.361 "num_base_bdevs": 2, 00:18:05.361 "num_base_bdevs_discovered": 2, 00:18:05.361 "num_base_bdevs_operational": 2, 00:18:05.361 "process": { 00:18:05.361 "type": "rebuild", 00:18:05.361 "target": "spare", 00:18:05.361 "progress": { 00:18:05.361 "blocks": 10240, 
00:18:05.361 "percent": 15 00:18:05.361 } 00:18:05.361 }, 00:18:05.361 "base_bdevs_list": [ 00:18:05.361 { 00:18:05.361 "name": "spare", 00:18:05.361 "uuid": "0cd65239-75f7-58b1-b27e-8f8451d78663", 00:18:05.361 "is_configured": true, 00:18:05.361 "data_offset": 0, 00:18:05.361 "data_size": 65536 00:18:05.361 }, 00:18:05.361 { 00:18:05.361 "name": "BaseBdev2", 00:18:05.361 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:05.361 "is_configured": true, 00:18:05.361 "data_offset": 0, 00:18:05.361 "data_size": 65536 00:18:05.361 } 00:18:05.361 ] 00:18:05.361 }' 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.361 04:39:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.361 [2024-11-27 04:39:52.880270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.361 [2024-11-27 04:39:52.912262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:05.620 [2024-11-27 04:39:53.027956] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:05.620 [2024-11-27 04:39:53.038159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.620 [2024-11-27 04:39:53.038205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:18:05.620 [2024-11-27 04:39:53.038226] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:05.620 [2024-11-27 04:39:53.074151] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.620 "name": "raid_bdev1", 00:18:05.620 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:05.620 "strip_size_kb": 0, 00:18:05.620 "state": "online", 00:18:05.620 "raid_level": "raid1", 00:18:05.620 "superblock": false, 00:18:05.620 "num_base_bdevs": 2, 00:18:05.620 "num_base_bdevs_discovered": 1, 00:18:05.620 "num_base_bdevs_operational": 1, 00:18:05.620 "base_bdevs_list": [ 00:18:05.620 { 00:18:05.620 "name": null, 00:18:05.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.620 "is_configured": false, 00:18:05.620 "data_offset": 0, 00:18:05.620 "data_size": 65536 00:18:05.620 }, 00:18:05.620 { 00:18:05.620 "name": "BaseBdev2", 00:18:05.620 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:05.620 "is_configured": true, 00:18:05.620 "data_offset": 0, 00:18:05.620 "data_size": 65536 00:18:05.620 } 00:18:05.620 ] 00:18:05.620 }' 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.620 04:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.137 130.50 IOPS, 391.50 MiB/s [2024-11-27T04:39:53.760Z] 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.137 "name": "raid_bdev1", 00:18:06.137 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:06.137 "strip_size_kb": 0, 00:18:06.137 "state": "online", 00:18:06.137 "raid_level": "raid1", 00:18:06.137 "superblock": false, 00:18:06.137 "num_base_bdevs": 2, 00:18:06.137 "num_base_bdevs_discovered": 1, 00:18:06.137 "num_base_bdevs_operational": 1, 00:18:06.137 "base_bdevs_list": [ 00:18:06.137 { 00:18:06.137 "name": null, 00:18:06.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.137 "is_configured": false, 00:18:06.137 "data_offset": 0, 00:18:06.137 "data_size": 65536 00:18:06.137 }, 00:18:06.137 { 00:18:06.137 "name": "BaseBdev2", 00:18:06.137 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:06.137 "is_configured": true, 00:18:06.137 "data_offset": 0, 00:18:06.137 "data_size": 65536 00:18:06.137 } 00:18:06.137 ] 00:18:06.137 }' 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.137 04:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.137 [2024-11-27 04:39:53.755862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.396 04:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.396 04:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:06.396 [2024-11-27 04:39:53.834333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:06.396 [2024-11-27 04:39:53.837067] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.396 [2024-11-27 04:39:53.946217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:06.396 [2024-11-27 04:39:53.946922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:06.655 [2024-11-27 04:39:54.073878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:06.655 [2024-11-27 04:39:54.074302] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:06.913 140.67 IOPS, 422.00 MiB/s [2024-11-27T04:39:54.536Z] [2024-11-27 04:39:54.407422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:06.913 [2024-11-27 04:39:54.408145] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:07.172 [2024-11-27 04:39:54.643112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.431 "name": "raid_bdev1", 00:18:07.431 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:07.431 "strip_size_kb": 0, 00:18:07.431 "state": "online", 00:18:07.431 "raid_level": "raid1", 00:18:07.431 "superblock": false, 00:18:07.431 "num_base_bdevs": 2, 00:18:07.431 "num_base_bdevs_discovered": 2, 00:18:07.431 "num_base_bdevs_operational": 2, 00:18:07.431 "process": { 00:18:07.431 "type": "rebuild", 00:18:07.431 "target": "spare", 00:18:07.431 "progress": { 00:18:07.431 "blocks": 10240, 00:18:07.431 "percent": 15 00:18:07.431 } 00:18:07.431 }, 00:18:07.431 "base_bdevs_list": [ 00:18:07.431 { 00:18:07.431 "name": "spare", 00:18:07.431 "uuid": "0cd65239-75f7-58b1-b27e-8f8451d78663", 00:18:07.431 "is_configured": true, 00:18:07.431 "data_offset": 0, 00:18:07.431 "data_size": 65536 00:18:07.431 }, 00:18:07.431 { 00:18:07.431 "name": "BaseBdev2", 
00:18:07.431 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:07.431 "is_configured": true, 00:18:07.431 "data_offset": 0, 00:18:07.431 "data_size": 65536 00:18:07.431 } 00:18:07.431 ] 00:18:07.431 }' 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=435 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.431 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.432 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.432 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.432 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.432 04:39:54 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.432 04:39:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.432 04:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.432 [2024-11-27 04:39:54.977062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:07.432 04:39:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.432 04:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.432 "name": "raid_bdev1", 00:18:07.432 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:07.432 "strip_size_kb": 0, 00:18:07.432 "state": "online", 00:18:07.432 "raid_level": "raid1", 00:18:07.432 "superblock": false, 00:18:07.432 "num_base_bdevs": 2, 00:18:07.432 "num_base_bdevs_discovered": 2, 00:18:07.432 "num_base_bdevs_operational": 2, 00:18:07.432 "process": { 00:18:07.432 "type": "rebuild", 00:18:07.432 "target": "spare", 00:18:07.432 "progress": { 00:18:07.432 "blocks": 12288, 00:18:07.432 "percent": 18 00:18:07.432 } 00:18:07.432 }, 00:18:07.432 "base_bdevs_list": [ 00:18:07.432 { 00:18:07.432 "name": "spare", 00:18:07.432 "uuid": "0cd65239-75f7-58b1-b27e-8f8451d78663", 00:18:07.432 "is_configured": true, 00:18:07.432 "data_offset": 0, 00:18:07.432 "data_size": 65536 00:18:07.432 }, 00:18:07.432 { 00:18:07.432 "name": "BaseBdev2", 00:18:07.432 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:07.432 "is_configured": true, 00:18:07.432 "data_offset": 0, 00:18:07.432 "data_size": 65536 00:18:07.432 } 00:18:07.432 ] 00:18:07.432 }' 00:18:07.432 04:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.691 04:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.691 04:39:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.691 04:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.691 04:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:07.691 [2024-11-27 04:39:55.204968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:07.691 [2024-11-27 04:39:55.205557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:07.949 121.00 IOPS, 363.00 MiB/s [2024-11-27T04:39:55.572Z] [2024-11-27 04:39:55.531546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:08.207 [2024-11-27 04:39:55.640897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:08.465 [2024-11-27 04:39:55.988052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.723 04:39:56 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.723 "name": "raid_bdev1", 00:18:08.723 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:08.723 "strip_size_kb": 0, 00:18:08.723 "state": "online", 00:18:08.723 "raid_level": "raid1", 00:18:08.723 "superblock": false, 00:18:08.723 "num_base_bdevs": 2, 00:18:08.723 "num_base_bdevs_discovered": 2, 00:18:08.723 "num_base_bdevs_operational": 2, 00:18:08.723 "process": { 00:18:08.723 "type": "rebuild", 00:18:08.723 "target": "spare", 00:18:08.723 "progress": { 00:18:08.723 "blocks": 26624, 00:18:08.723 "percent": 40 00:18:08.723 } 00:18:08.723 }, 00:18:08.723 "base_bdevs_list": [ 00:18:08.723 { 00:18:08.723 "name": "spare", 00:18:08.723 "uuid": "0cd65239-75f7-58b1-b27e-8f8451d78663", 00:18:08.723 "is_configured": true, 00:18:08.723 "data_offset": 0, 00:18:08.723 "data_size": 65536 00:18:08.723 }, 00:18:08.723 { 00:18:08.723 "name": "BaseBdev2", 00:18:08.723 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:08.723 "is_configured": true, 00:18:08.723 "data_offset": 0, 00:18:08.723 "data_size": 65536 00:18:08.723 } 00:18:08.723 ] 00:18:08.723 }' 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.723 [2024-11-27 04:39:56.190505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.723 04:39:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.723 04:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:09.288 109.20 IOPS, 327.60 MiB/s [2024-11-27T04:39:56.912Z] [2024-11-27 04:39:56.898374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:09.892 [2024-11-27 04:39:57.217746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.892 96.67 IOPS, 290.00 MiB/s [2024-11-27T04:39:57.515Z] 04:39:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.892 "name": "raid_bdev1", 00:18:09.892 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:09.892 "strip_size_kb": 0, 00:18:09.892 "state": "online", 00:18:09.892 "raid_level": "raid1", 00:18:09.892 "superblock": false, 00:18:09.892 "num_base_bdevs": 2, 00:18:09.892 "num_base_bdevs_discovered": 2, 00:18:09.892 "num_base_bdevs_operational": 2, 00:18:09.892 "process": { 00:18:09.892 "type": "rebuild", 00:18:09.892 "target": "spare", 00:18:09.892 "progress": { 00:18:09.892 "blocks": 45056, 00:18:09.892 "percent": 68 00:18:09.892 } 00:18:09.892 }, 00:18:09.892 "base_bdevs_list": [ 00:18:09.892 { 00:18:09.892 "name": "spare", 00:18:09.892 "uuid": "0cd65239-75f7-58b1-b27e-8f8451d78663", 00:18:09.892 "is_configured": true, 00:18:09.892 "data_offset": 0, 00:18:09.892 "data_size": 65536 00:18:09.892 }, 00:18:09.892 { 00:18:09.892 "name": "BaseBdev2", 00:18:09.892 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:09.892 "is_configured": true, 00:18:09.892 "data_offset": 0, 00:18:09.892 "data_size": 65536 00:18:09.892 } 00:18:09.892 ] 00:18:09.892 }' 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.892 04:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:10.150 [2024-11-27 04:39:57.559367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:18:10.973 88.29 IOPS, 264.86 MiB/s [2024-11-27T04:39:58.596Z] 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:18:10.973 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.973 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.973 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.973 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.973 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.973 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.973 04:39:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.973 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.973 04:39:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.973 [2024-11-27 04:39:58.454804] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:10.974 04:39:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.974 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.974 "name": "raid_bdev1", 00:18:10.974 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:10.974 "strip_size_kb": 0, 00:18:10.974 "state": "online", 00:18:10.974 "raid_level": "raid1", 00:18:10.974 "superblock": false, 00:18:10.974 "num_base_bdevs": 2, 00:18:10.974 "num_base_bdevs_discovered": 2, 00:18:10.974 "num_base_bdevs_operational": 2, 00:18:10.974 "process": { 00:18:10.974 "type": "rebuild", 00:18:10.974 "target": "spare", 00:18:10.974 "progress": { 00:18:10.974 "blocks": 63488, 00:18:10.974 "percent": 96 00:18:10.974 } 00:18:10.974 }, 00:18:10.974 "base_bdevs_list": [ 00:18:10.974 { 00:18:10.974 "name": 
"spare", 00:18:10.974 "uuid": "0cd65239-75f7-58b1-b27e-8f8451d78663", 00:18:10.974 "is_configured": true, 00:18:10.974 "data_offset": 0, 00:18:10.974 "data_size": 65536 00:18:10.974 }, 00:18:10.974 { 00:18:10.974 "name": "BaseBdev2", 00:18:10.974 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:10.974 "is_configured": true, 00:18:10.974 "data_offset": 0, 00:18:10.974 "data_size": 65536 00:18:10.974 } 00:18:10.974 ] 00:18:10.974 }' 00:18:10.974 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.974 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.974 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.974 [2024-11-27 04:39:58.554529] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:10.974 [2024-11-27 04:39:58.565386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.974 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.974 04:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.162 81.50 IOPS, 244.50 MiB/s [2024-11-27T04:39:59.785Z] 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.162 04:39:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.162 "name": "raid_bdev1", 00:18:12.162 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:12.162 "strip_size_kb": 0, 00:18:12.162 "state": "online", 00:18:12.162 "raid_level": "raid1", 00:18:12.162 "superblock": false, 00:18:12.162 "num_base_bdevs": 2, 00:18:12.162 "num_base_bdevs_discovered": 2, 00:18:12.162 "num_base_bdevs_operational": 2, 00:18:12.162 "base_bdevs_list": [ 00:18:12.162 { 00:18:12.162 "name": "spare", 00:18:12.162 "uuid": "0cd65239-75f7-58b1-b27e-8f8451d78663", 00:18:12.162 "is_configured": true, 00:18:12.162 "data_offset": 0, 00:18:12.162 "data_size": 65536 00:18:12.162 }, 00:18:12.162 { 00:18:12.162 "name": "BaseBdev2", 00:18:12.162 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:12.162 "is_configured": true, 00:18:12.162 "data_offset": 0, 00:18:12.162 "data_size": 65536 00:18:12.162 } 00:18:12.162 ] 00:18:12.162 }' 00:18:12.162 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@709 -- # break 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.163 04:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.421 "name": "raid_bdev1", 00:18:12.421 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:12.421 "strip_size_kb": 0, 00:18:12.421 "state": "online", 00:18:12.421 "raid_level": "raid1", 00:18:12.421 "superblock": false, 00:18:12.421 "num_base_bdevs": 2, 00:18:12.421 "num_base_bdevs_discovered": 2, 00:18:12.421 "num_base_bdevs_operational": 2, 00:18:12.421 "base_bdevs_list": [ 00:18:12.421 { 00:18:12.421 "name": "spare", 00:18:12.421 "uuid": "0cd65239-75f7-58b1-b27e-8f8451d78663", 00:18:12.421 "is_configured": true, 00:18:12.421 "data_offset": 0, 00:18:12.421 "data_size": 65536 00:18:12.421 }, 00:18:12.421 { 00:18:12.421 "name": "BaseBdev2", 00:18:12.421 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:12.421 "is_configured": true, 
00:18:12.421 "data_offset": 0, 00:18:12.421 "data_size": 65536 00:18:12.421 } 00:18:12.421 ] 00:18:12.421 }' 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.421 04:39:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.421 "name": "raid_bdev1", 00:18:12.421 "uuid": "70135dd9-a19b-465f-bfec-ef3060cda1d8", 00:18:12.421 "strip_size_kb": 0, 00:18:12.421 "state": "online", 00:18:12.421 "raid_level": "raid1", 00:18:12.421 "superblock": false, 00:18:12.421 "num_base_bdevs": 2, 00:18:12.421 "num_base_bdevs_discovered": 2, 00:18:12.421 "num_base_bdevs_operational": 2, 00:18:12.421 "base_bdevs_list": [ 00:18:12.421 { 00:18:12.421 "name": "spare", 00:18:12.421 "uuid": "0cd65239-75f7-58b1-b27e-8f8451d78663", 00:18:12.421 "is_configured": true, 00:18:12.421 "data_offset": 0, 00:18:12.421 "data_size": 65536 00:18:12.421 }, 00:18:12.421 { 00:18:12.421 "name": "BaseBdev2", 00:18:12.421 "uuid": "abea0784-177f-5265-83c1-01ea55924bf5", 00:18:12.421 "is_configured": true, 00:18:12.421 "data_offset": 0, 00:18:12.421 "data_size": 65536 00:18:12.421 } 00:18:12.421 ] 00:18:12.421 }' 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.421 04:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.987 77.11 IOPS, 231.33 MiB/s [2024-11-27T04:40:00.610Z] 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:12.987 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.987 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.987 [2024-11-27 04:40:00.444935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.987 [2024-11-27 04:40:00.445111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:18:12.987 00:18:12.987 Latency(us) 00:18:12.987 [2024-11-27T04:40:00.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.987 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:12.987 raid_bdev1 : 9.24 76.01 228.03 0.00 0.00 17857.73 299.75 111530.36 00:18:12.987 [2024-11-27T04:40:00.610Z] =================================================================================================================== 00:18:12.987 [2024-11-27T04:40:00.610Z] Total : 76.01 228.03 0.00 0.00 17857.73 299.75 111530.36 00:18:12.987 { 00:18:12.987 "results": [ 00:18:12.987 { 00:18:12.987 "job": "raid_bdev1", 00:18:12.987 "core_mask": "0x1", 00:18:12.987 "workload": "randrw", 00:18:12.987 "percentage": 50, 00:18:12.987 "status": "finished", 00:18:12.987 "queue_depth": 2, 00:18:12.987 "io_size": 3145728, 00:18:12.987 "runtime": 9.235487, 00:18:12.987 "iops": 76.01115133397947, 00:18:12.987 "mibps": 228.03345400193842, 00:18:12.987 "io_failed": 0, 00:18:12.987 "io_timeout": 0, 00:18:12.987 "avg_latency_us": 17857.73385133385, 00:18:12.987 "min_latency_us": 299.75272727272727, 00:18:12.987 "max_latency_us": 111530.35636363637 00:18:12.987 } 00:18:12.987 ], 00:18:12.987 "core_count": 1 00:18:12.987 } 00:18:12.988 [2024-11-27 04:40:00.556968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.988 [2024-11-27 04:40:00.557060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.988 [2024-11-27 04:40:00.557167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.988 [2024-11-27 04:40:00.557196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:12.988 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.988 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:12.988 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:12.988 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.988 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.988 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.246 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:13.517 /dev/nbd0 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:13.517 04:40:00 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.517 1+0 records in 00:18:13.517 1+0 records out 00:18:13.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511909 s, 8.0 MB/s 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.517 04:40:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:13.782 /dev/nbd1 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.782 1+0 records in 00:18:13.782 1+0 records out 00:18:13.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352439 s, 11.6 MB/s 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.782 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:14.041 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:14.041 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:14.041 04:40:01 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:14.041 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:14.041 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:14.041 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.041 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:14.299 04:40:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.299 04:40:01 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76804 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76804 ']' 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76804 00:18:14.556 04:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:18:14.814 04:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.814 04:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76804 00:18:14.814 killing process with pid 76804 00:18:14.814 Received shutdown signal, test time was about 10.901079 seconds 00:18:14.814 00:18:14.814 Latency(us) 00:18:14.814 [2024-11-27T04:40:02.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.814 
[2024-11-27T04:40:02.437Z] =================================================================================================================== 00:18:14.814 [2024-11-27T04:40:02.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.814 04:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.814 04:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.814 04:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76804' 00:18:14.814 04:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76804 00:18:14.814 [2024-11-27 04:40:02.202476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.814 04:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76804 00:18:14.814 [2024-11-27 04:40:02.413689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.187 04:40:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:16.187 00:18:16.187 real 0m14.302s 00:18:16.187 user 0m18.557s 00:18:16.187 sys 0m1.437s 00:18:16.187 ************************************ 00:18:16.187 END TEST raid_rebuild_test_io 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.188 ************************************ 00:18:16.188 04:40:03 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:18:16.188 04:40:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:16.188 04:40:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.188 04:40:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.188 ************************************ 00:18:16.188 START TEST 
raid_rebuild_test_sb_io 00:18:16.188 ************************************ 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@576 -- # local strip_size 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:16.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77210 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77210 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77210 ']' 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.188 04:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.188 [2024-11-27 04:40:03.696481] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:18:16.188 [2024-11-27 04:40:03.696843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77210 ] 00:18:16.188 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:16.188 Zero copy mechanism will not be used. 00:18:16.446 [2024-11-27 04:40:03.873997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.446 [2024-11-27 04:40:04.007146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.702 [2024-11-27 04:40:04.212001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.702 [2024-11-27 04:40:04.212048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 BaseBdev1_malloc 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 [2024-11-27 04:40:04.699487] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:17.269 [2024-11-27 04:40:04.699710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.269 [2024-11-27 04:40:04.699754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:17.269 [2024-11-27 04:40:04.699794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.269 [2024-11-27 04:40:04.702588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.269 [2024-11-27 04:40:04.702640] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:17.269 BaseBdev1 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 BaseBdev2_malloc 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 [2024-11-27 04:40:04.756280] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:17.269 [2024-11-27 04:40:04.756506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.269 [2024-11-27 04:40:04.756585] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:17.269 [2024-11-27 04:40:04.756717] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.269 [2024-11-27 04:40:04.759550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.269 [2024-11-27 04:40:04.759602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:17.269 BaseBdev2 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 spare_malloc 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 spare_delay 
00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 [2024-11-27 04:40:04.827510] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:17.269 [2024-11-27 04:40:04.827590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.269 [2024-11-27 04:40:04.827621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:17.269 [2024-11-27 04:40:04.827639] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.269 [2024-11-27 04:40:04.830604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.269 [2024-11-27 04:40:04.830661] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:17.269 spare 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.269 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.270 [2024-11-27 04:40:04.835675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.270 [2024-11-27 04:40:04.838194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:17.270 [2024-11-27 04:40:04.838438] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:17.270 [2024-11-27 04:40:04.838464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:17.270 [2024-11-27 04:40:04.838791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:17.270 [2024-11-27 04:40:04.839022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:17.270 [2024-11-27 04:40:04.839045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:17.270 [2024-11-27 04:40:04.839236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.270 04:40:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.270 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.527 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.527 "name": "raid_bdev1", 00:18:17.527 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:17.527 "strip_size_kb": 0, 00:18:17.527 "state": "online", 00:18:17.527 "raid_level": "raid1", 00:18:17.527 "superblock": true, 00:18:17.527 "num_base_bdevs": 2, 00:18:17.527 "num_base_bdevs_discovered": 2, 00:18:17.527 "num_base_bdevs_operational": 2, 00:18:17.527 "base_bdevs_list": [ 00:18:17.527 { 00:18:17.527 "name": "BaseBdev1", 00:18:17.527 "uuid": "5f136f52-7c6e-52c8-bfac-d5831cdadfd5", 00:18:17.527 "is_configured": true, 00:18:17.527 "data_offset": 2048, 00:18:17.527 "data_size": 63488 00:18:17.527 }, 00:18:17.527 { 00:18:17.527 "name": "BaseBdev2", 00:18:17.527 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:17.527 "is_configured": true, 00:18:17.527 "data_offset": 2048, 00:18:17.527 "data_size": 63488 00:18:17.527 } 00:18:17.527 ] 00:18:17.527 }' 00:18:17.527 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.527 04:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:17.785 04:40:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.785 [2024-11-27 04:40:05.308167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.785 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.042 [2024-11-27 04:40:05.407802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.042 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.043 "name": "raid_bdev1", 00:18:18.043 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:18.043 "strip_size_kb": 0, 00:18:18.043 "state": "online", 00:18:18.043 
"raid_level": "raid1", 00:18:18.043 "superblock": true, 00:18:18.043 "num_base_bdevs": 2, 00:18:18.043 "num_base_bdevs_discovered": 1, 00:18:18.043 "num_base_bdevs_operational": 1, 00:18:18.043 "base_bdevs_list": [ 00:18:18.043 { 00:18:18.043 "name": null, 00:18:18.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.043 "is_configured": false, 00:18:18.043 "data_offset": 0, 00:18:18.043 "data_size": 63488 00:18:18.043 }, 00:18:18.043 { 00:18:18.043 "name": "BaseBdev2", 00:18:18.043 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:18.043 "is_configured": true, 00:18:18.043 "data_offset": 2048, 00:18:18.043 "data_size": 63488 00:18:18.043 } 00:18:18.043 ] 00:18:18.043 }' 00:18:18.043 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.043 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.043 [2024-11-27 04:40:05.531792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:18.043 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:18.043 Zero copy mechanism will not be used. 00:18:18.043 Running I/O for 60 seconds... 
00:18:18.609 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:18.609 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.609 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.609 [2024-11-27 04:40:05.949031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.609 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.609 04:40:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:18.609 [2024-11-27 04:40:06.034384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:18.609 [2024-11-27 04:40:06.037162] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.609 [2024-11-27 04:40:06.160068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:18.867 [2024-11-27 04:40:06.394727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:18.867 [2024-11-27 04:40:06.395157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:19.384 207.00 IOPS, 621.00 MiB/s [2024-11-27T04:40:07.007Z] [2024-11-27 04:40:06.854664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:19.384 [2024-11-27 04:40:06.855112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.643 "name": "raid_bdev1", 00:18:19.643 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:19.643 "strip_size_kb": 0, 00:18:19.643 "state": "online", 00:18:19.643 "raid_level": "raid1", 00:18:19.643 "superblock": true, 00:18:19.643 "num_base_bdevs": 2, 00:18:19.643 "num_base_bdevs_discovered": 2, 00:18:19.643 "num_base_bdevs_operational": 2, 00:18:19.643 "process": { 00:18:19.643 "type": "rebuild", 00:18:19.643 "target": "spare", 00:18:19.643 "progress": { 00:18:19.643 "blocks": 10240, 00:18:19.643 "percent": 16 00:18:19.643 } 00:18:19.643 }, 00:18:19.643 "base_bdevs_list": [ 00:18:19.643 { 00:18:19.643 "name": "spare", 00:18:19.643 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:19.643 "is_configured": true, 00:18:19.643 "data_offset": 2048, 00:18:19.643 "data_size": 63488 00:18:19.643 }, 00:18:19.643 { 00:18:19.643 "name": "BaseBdev2", 00:18:19.643 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:19.643 
"is_configured": true, 00:18:19.643 "data_offset": 2048, 00:18:19.643 "data_size": 63488 00:18:19.643 } 00:18:19.643 ] 00:18:19.643 }' 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.643 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.643 [2024-11-27 04:40:07.162506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.643 [2024-11-27 04:40:07.211962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:19.901 [2024-11-27 04:40:07.337203] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:19.901 [2024-11-27 04:40:07.340305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.901 [2024-11-27 04:40:07.340497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.901 [2024-11-27 04:40:07.340563] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:19.901 [2024-11-27 04:40:07.377002] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.901 04:40:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.901 "name": "raid_bdev1", 00:18:19.901 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:19.901 "strip_size_kb": 0, 00:18:19.901 "state": "online", 00:18:19.901 "raid_level": "raid1", 00:18:19.901 
"superblock": true, 00:18:19.901 "num_base_bdevs": 2, 00:18:19.901 "num_base_bdevs_discovered": 1, 00:18:19.901 "num_base_bdevs_operational": 1, 00:18:19.901 "base_bdevs_list": [ 00:18:19.901 { 00:18:19.901 "name": null, 00:18:19.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.901 "is_configured": false, 00:18:19.901 "data_offset": 0, 00:18:19.901 "data_size": 63488 00:18:19.901 }, 00:18:19.901 { 00:18:19.901 "name": "BaseBdev2", 00:18:19.901 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:19.901 "is_configured": true, 00:18:19.901 "data_offset": 2048, 00:18:19.901 "data_size": 63488 00:18:19.901 } 00:18:19.901 ] 00:18:19.901 }' 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.901 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.418 140.50 IOPS, 421.50 MiB/s [2024-11-27T04:40:08.041Z] 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.418 "name": "raid_bdev1", 00:18:20.418 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:20.418 "strip_size_kb": 0, 00:18:20.418 "state": "online", 00:18:20.418 "raid_level": "raid1", 00:18:20.418 "superblock": true, 00:18:20.418 "num_base_bdevs": 2, 00:18:20.418 "num_base_bdevs_discovered": 1, 00:18:20.418 "num_base_bdevs_operational": 1, 00:18:20.418 "base_bdevs_list": [ 00:18:20.418 { 00:18:20.418 "name": null, 00:18:20.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.418 "is_configured": false, 00:18:20.418 "data_offset": 0, 00:18:20.418 "data_size": 63488 00:18:20.418 }, 00:18:20.418 { 00:18:20.418 "name": "BaseBdev2", 00:18:20.418 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:20.418 "is_configured": true, 00:18:20.418 "data_offset": 2048, 00:18:20.418 "data_size": 63488 00:18:20.418 } 00:18:20.418 ] 00:18:20.418 }' 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.418 04:40:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.418 04:40:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.418 04:40:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.418 04:40:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.418 04:40:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.677 [2024-11-27 04:40:08.047970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.677 04:40:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.677 04:40:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:20.677 [2024-11-27 04:40:08.093274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:20.677 [2024-11-27 04:40:08.095991] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.677 [2024-11-27 04:40:08.242590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:20.935 [2024-11-27 04:40:08.353667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:20.936 [2024-11-27 04:40:08.354329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:21.195 160.67 IOPS, 482.00 MiB/s [2024-11-27T04:40:08.818Z] [2024-11-27 04:40:08.717617] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:21.453 [2024-11-27 04:40:08.955138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.712 04:40:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.712 "name": "raid_bdev1", 00:18:21.712 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:21.712 "strip_size_kb": 0, 00:18:21.712 "state": "online", 00:18:21.712 "raid_level": "raid1", 00:18:21.712 "superblock": true, 00:18:21.712 "num_base_bdevs": 2, 00:18:21.712 "num_base_bdevs_discovered": 2, 00:18:21.712 "num_base_bdevs_operational": 2, 00:18:21.712 "process": { 00:18:21.712 "type": "rebuild", 00:18:21.712 "target": "spare", 00:18:21.712 "progress": { 00:18:21.712 "blocks": 12288, 00:18:21.712 "percent": 19 00:18:21.712 } 00:18:21.712 }, 00:18:21.712 "base_bdevs_list": [ 00:18:21.712 { 00:18:21.712 "name": "spare", 00:18:21.712 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:21.712 "is_configured": true, 00:18:21.712 "data_offset": 2048, 00:18:21.712 "data_size": 63488 00:18:21.712 }, 00:18:21.712 { 00:18:21.712 "name": "BaseBdev2", 00:18:21.712 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:21.712 "is_configured": true, 00:18:21.712 "data_offset": 2048, 00:18:21.712 "data_size": 63488 00:18:21.712 } 00:18:21.712 ] 00:18:21.712 }' 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.712 [2024-11-27 04:40:09.186183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:21.712 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=450 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.712 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.713 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.713 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.713 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.713 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.713 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.713 "name": "raid_bdev1", 00:18:21.713 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:21.713 "strip_size_kb": 0, 00:18:21.713 "state": "online", 00:18:21.713 "raid_level": "raid1", 00:18:21.713 "superblock": true, 00:18:21.713 "num_base_bdevs": 2, 00:18:21.713 "num_base_bdevs_discovered": 2, 00:18:21.713 "num_base_bdevs_operational": 2, 00:18:21.713 "process": { 00:18:21.713 "type": "rebuild", 00:18:21.713 "target": "spare", 00:18:21.713 "progress": { 00:18:21.713 "blocks": 14336, 00:18:21.713 "percent": 22 00:18:21.713 } 00:18:21.713 }, 00:18:21.713 "base_bdevs_list": [ 00:18:21.713 { 00:18:21.713 "name": "spare", 00:18:21.713 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:21.713 "is_configured": true, 00:18:21.713 "data_offset": 2048, 00:18:21.713 "data_size": 63488 00:18:21.713 }, 00:18:21.713 { 00:18:21.713 "name": "BaseBdev2", 00:18:21.713 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:21.713 "is_configured": true, 00:18:21.713 "data_offset": 2048, 00:18:21.713 "data_size": 63488 00:18:21.713 } 00:18:21.713 ] 00:18:21.713 }' 00:18:21.713 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.971 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.971 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.971 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.971 04:40:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:18:21.971 [2024-11-27 04:40:09.439266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:22.551 138.25 IOPS, 414.75 MiB/s [2024-11-27T04:40:10.174Z] [2024-11-27 04:40:10.122142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:22.811 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.811 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.811 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.811 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.811 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.811 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.811 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.811 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.811 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.811 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:23.071 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.071 [2024-11-27 04:40:10.454288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:23.071 [2024-11-27 04:40:10.454995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:23.071 04:40:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.071 "name": "raid_bdev1", 00:18:23.071 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:23.071 "strip_size_kb": 0, 00:18:23.071 "state": "online", 00:18:23.071 "raid_level": "raid1", 00:18:23.071 "superblock": true, 00:18:23.071 "num_base_bdevs": 2, 00:18:23.071 "num_base_bdevs_discovered": 2, 00:18:23.071 "num_base_bdevs_operational": 2, 00:18:23.071 "process": { 00:18:23.071 "type": "rebuild", 00:18:23.071 "target": "spare", 00:18:23.071 "progress": { 00:18:23.071 "blocks": 30720, 00:18:23.071 "percent": 48 00:18:23.071 } 00:18:23.071 }, 00:18:23.071 "base_bdevs_list": [ 00:18:23.071 { 00:18:23.071 "name": "spare", 00:18:23.071 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:23.071 "is_configured": true, 00:18:23.071 "data_offset": 2048, 00:18:23.071 "data_size": 63488 00:18:23.071 }, 00:18:23.071 { 00:18:23.071 "name": "BaseBdev2", 00:18:23.071 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:23.071 "is_configured": true, 00:18:23.071 "data_offset": 2048, 00:18:23.071 "data_size": 63488 00:18:23.071 } 00:18:23.071 ] 00:18:23.071 }' 00:18:23.071 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.071 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.071 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.071 128.60 IOPS, 385.80 MiB/s [2024-11-27T04:40:10.694Z] 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.071 04:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.071 [2024-11-27 04:40:10.573308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:23.071 [2024-11-27 04:40:10.573715] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:23.648 [2024-11-27 04:40:11.041286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:23.907 [2024-11-27 04:40:11.367629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:24.166 113.17 IOPS, 339.50 MiB/s [2024-11-27T04:40:11.789Z] 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.166 [2024-11-27 04:40:11.596435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:24.166 "name": "raid_bdev1", 00:18:24.166 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:24.166 "strip_size_kb": 0, 00:18:24.166 "state": "online", 00:18:24.166 "raid_level": "raid1", 00:18:24.166 "superblock": true, 00:18:24.166 "num_base_bdevs": 2, 00:18:24.166 "num_base_bdevs_discovered": 2, 00:18:24.166 "num_base_bdevs_operational": 2, 00:18:24.166 "process": { 00:18:24.166 "type": "rebuild", 00:18:24.166 "target": "spare", 00:18:24.166 "progress": { 00:18:24.166 "blocks": 45056, 00:18:24.166 "percent": 70 00:18:24.166 } 00:18:24.166 }, 00:18:24.166 "base_bdevs_list": [ 00:18:24.166 { 00:18:24.166 "name": "spare", 00:18:24.166 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:24.166 "is_configured": true, 00:18:24.166 "data_offset": 2048, 00:18:24.166 "data_size": 63488 00:18:24.166 }, 00:18:24.166 { 00:18:24.166 "name": "BaseBdev2", 00:18:24.166 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:24.166 "is_configured": true, 00:18:24.166 "data_offset": 2048, 00:18:24.166 "data_size": 63488 00:18:24.166 } 00:18:24.166 ] 00:18:24.166 }' 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.166 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.167 04:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.425 [2024-11-27 04:40:11.964939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:18:24.683 [2024-11-27 04:40:12.091769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:24.941 
[2024-11-27 04:40:12.448556] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:18:25.200 102.29 IOPS, 306.86 MiB/s [2024-11-27T04:40:12.823Z] [2024-11-27 04:40:12.675510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.200 "name": "raid_bdev1", 00:18:25.200 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:25.200 "strip_size_kb": 0, 00:18:25.200 "state": "online", 00:18:25.200 "raid_level": "raid1", 00:18:25.200 "superblock": true, 00:18:25.200 "num_base_bdevs": 2, 
00:18:25.200 "num_base_bdevs_discovered": 2, 00:18:25.200 "num_base_bdevs_operational": 2, 00:18:25.200 "process": { 00:18:25.200 "type": "rebuild", 00:18:25.200 "target": "spare", 00:18:25.200 "progress": { 00:18:25.200 "blocks": 59392, 00:18:25.200 "percent": 93 00:18:25.200 } 00:18:25.200 }, 00:18:25.200 "base_bdevs_list": [ 00:18:25.200 { 00:18:25.200 "name": "spare", 00:18:25.200 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:25.200 "is_configured": true, 00:18:25.200 "data_offset": 2048, 00:18:25.200 "data_size": 63488 00:18:25.200 }, 00:18:25.200 { 00:18:25.200 "name": "BaseBdev2", 00:18:25.200 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:25.200 "is_configured": true, 00:18:25.200 "data_offset": 2048, 00:18:25.200 "data_size": 63488 00:18:25.200 } 00:18:25.200 ] 00:18:25.200 }' 00:18:25.200 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.459 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.459 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.459 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.459 04:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.459 [2024-11-27 04:40:13.015907] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:25.717 [2024-11-27 04:40:13.115875] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:25.717 [2024-11-27 04:40:13.118321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.542 94.00 IOPS, 282.00 MiB/s [2024-11-27T04:40:14.165Z] 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.542 "name": "raid_bdev1", 00:18:26.542 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:26.542 "strip_size_kb": 0, 00:18:26.542 "state": "online", 00:18:26.542 "raid_level": "raid1", 00:18:26.542 "superblock": true, 00:18:26.542 "num_base_bdevs": 2, 00:18:26.542 "num_base_bdevs_discovered": 2, 00:18:26.542 "num_base_bdevs_operational": 2, 00:18:26.542 "base_bdevs_list": [ 00:18:26.542 { 00:18:26.542 "name": "spare", 00:18:26.542 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:26.542 "is_configured": true, 00:18:26.542 "data_offset": 2048, 00:18:26.542 "data_size": 63488 00:18:26.542 }, 00:18:26.542 { 00:18:26.542 "name": "BaseBdev2", 00:18:26.542 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:26.542 "is_configured": true, 00:18:26.542 "data_offset": 2048, 00:18:26.542 "data_size": 63488 
00:18:26.542 } 00:18:26.542 ] 00:18:26.542 }' 00:18:26.542 04:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.542 "name": "raid_bdev1", 00:18:26.542 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:26.542 "strip_size_kb": 0, 00:18:26.542 "state": 
"online", 00:18:26.542 "raid_level": "raid1", 00:18:26.542 "superblock": true, 00:18:26.542 "num_base_bdevs": 2, 00:18:26.542 "num_base_bdevs_discovered": 2, 00:18:26.542 "num_base_bdevs_operational": 2, 00:18:26.542 "base_bdevs_list": [ 00:18:26.542 { 00:18:26.542 "name": "spare", 00:18:26.542 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:26.542 "is_configured": true, 00:18:26.542 "data_offset": 2048, 00:18:26.542 "data_size": 63488 00:18:26.542 }, 00:18:26.542 { 00:18:26.542 "name": "BaseBdev2", 00:18:26.542 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:26.542 "is_configured": true, 00:18:26.542 "data_offset": 2048, 00:18:26.542 "data_size": 63488 00:18:26.542 } 00:18:26.542 ] 00:18:26.542 }' 00:18:26.542 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.801 
04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.801 "name": "raid_bdev1", 00:18:26.801 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:26.801 "strip_size_kb": 0, 00:18:26.801 "state": "online", 00:18:26.801 "raid_level": "raid1", 00:18:26.801 "superblock": true, 00:18:26.801 "num_base_bdevs": 2, 00:18:26.801 "num_base_bdevs_discovered": 2, 00:18:26.801 "num_base_bdevs_operational": 2, 00:18:26.801 "base_bdevs_list": [ 00:18:26.801 { 00:18:26.801 "name": "spare", 00:18:26.801 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:26.801 "is_configured": true, 00:18:26.801 "data_offset": 2048, 00:18:26.801 "data_size": 63488 00:18:26.801 }, 00:18:26.801 { 00:18:26.801 "name": "BaseBdev2", 00:18:26.801 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:26.801 "is_configured": true, 00:18:26.801 "data_offset": 2048, 00:18:26.801 "data_size": 63488 00:18:26.801 } 00:18:26.801 ] 00:18:26.801 }' 00:18:26.801 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.801 04:40:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:27.318 87.22 IOPS, 261.67 MiB/s [2024-11-27T04:40:14.941Z] 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:27.318 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:27.319 [2024-11-27 04:40:14.715823] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.319 [2024-11-27 04:40:14.716004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.319 00:18:27.319 Latency(us) 00:18:27.319 [2024-11-27T04:40:14.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.319 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:27.319 raid_bdev1 : 9.27 85.51 256.54 0.00 0.00 16697.34 292.31 118203.11 00:18:27.319 [2024-11-27T04:40:14.942Z] =================================================================================================================== 00:18:27.319 [2024-11-27T04:40:14.942Z] Total : 85.51 256.54 0.00 0.00 16697.34 292.31 118203.11 00:18:27.319 { 00:18:27.319 "results": [ 00:18:27.319 { 00:18:27.319 "job": "raid_bdev1", 00:18:27.319 "core_mask": "0x1", 00:18:27.319 "workload": "randrw", 00:18:27.319 "percentage": 50, 00:18:27.319 "status": "finished", 00:18:27.319 "queue_depth": 2, 00:18:27.319 "io_size": 3145728, 00:18:27.319 "runtime": 9.273399, 00:18:27.319 "iops": 85.51341315088459, 00:18:27.319 "mibps": 256.54023945265374, 00:18:27.319 "io_failed": 0, 00:18:27.319 "io_timeout": 0, 00:18:27.319 "avg_latency_us": 16697.338365241318, 00:18:27.319 "min_latency_us": 292.30545454545455, 00:18:27.319 "max_latency_us": 118203.11272727273 00:18:27.319 } 00:18:27.319 ], 00:18:27.319 "core_count": 1 00:18:27.319 } 
00:18:27.319 [2024-11-27 04:40:14.828018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.319 [2024-11-27 04:40:14.828126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.319 [2024-11-27 04:40:14.828236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.319 [2024-11-27 04:40:14.828261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:27.319 04:40:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.319 04:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:27.578 /dev/nbd0 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.578 1+0 records in 00:18:27.578 1+0 
records out 00:18:27.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614183 s, 6.7 MB/s 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 
)) 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.578 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:27.837 /dev/nbd1 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:28.095 1+0 records in 00:18:28.095 1+0 records out 00:18:28.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326785 s, 12.5 MB/s 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:28.095 04:40:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:28.095 04:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:28.661 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:28.661 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:28.661 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:28.661 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:28.662 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 
00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.920 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:28.920 [2024-11-27 04:40:16.358542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:28.920 [2024-11-27 04:40:16.358619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.920 [2024-11-27 04:40:16.358655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:28.920 [2024-11-27 04:40:16.358675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.920 [2024-11-27 04:40:16.361916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.920 [2024-11-27 04:40:16.361969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:28.920 [2024-11-27 04:40:16.362109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:28.920 [2024-11-27 04:40:16.362180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.920 [2024-11-27 04:40:16.362399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:18:28.920 spare 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:28.921 [2024-11-27 04:40:16.462564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:28.921 [2024-11-27 04:40:16.462834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:28.921 [2024-11-27 04:40:16.463312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:18:28.921 [2024-11-27 04:40:16.463600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:28.921 [2024-11-27 04:40:16.463626] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:28.921 [2024-11-27 04:40:16.463910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.921 "name": "raid_bdev1", 00:18:28.921 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:28.921 "strip_size_kb": 0, 00:18:28.921 "state": "online", 00:18:28.921 "raid_level": "raid1", 00:18:28.921 "superblock": true, 00:18:28.921 "num_base_bdevs": 2, 00:18:28.921 "num_base_bdevs_discovered": 2, 00:18:28.921 "num_base_bdevs_operational": 2, 00:18:28.921 "base_bdevs_list": [ 00:18:28.921 { 00:18:28.921 "name": "spare", 00:18:28.921 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:28.921 "is_configured": true, 00:18:28.921 "data_offset": 2048, 00:18:28.921 "data_size": 63488 00:18:28.921 }, 00:18:28.921 { 00:18:28.921 "name": "BaseBdev2", 00:18:28.921 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:28.921 "is_configured": true, 00:18:28.921 "data_offset": 2048, 00:18:28.921 "data_size": 63488 00:18:28.921 } 00:18:28.921 ] 
00:18:28.921 }' 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.921 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.489 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.489 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.489 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.489 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.489 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.489 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.489 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.489 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.489 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.489 04:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.489 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.489 "name": "raid_bdev1", 00:18:29.489 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:29.489 "strip_size_kb": 0, 00:18:29.489 "state": "online", 00:18:29.489 "raid_level": "raid1", 00:18:29.489 "superblock": true, 00:18:29.489 "num_base_bdevs": 2, 00:18:29.489 "num_base_bdevs_discovered": 2, 00:18:29.489 "num_base_bdevs_operational": 2, 00:18:29.489 "base_bdevs_list": [ 00:18:29.489 { 00:18:29.489 "name": "spare", 00:18:29.489 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:29.489 "is_configured": true, 00:18:29.489 
"data_offset": 2048, 00:18:29.489 "data_size": 63488 00:18:29.489 }, 00:18:29.489 { 00:18:29.489 "name": "BaseBdev2", 00:18:29.489 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:29.489 "is_configured": true, 00:18:29.489 "data_offset": 2048, 00:18:29.489 "data_size": 63488 00:18:29.489 } 00:18:29.489 ] 00:18:29.489 }' 00:18:29.489 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.489 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.489 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.747 [2024-11-27 04:40:17.184157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.747 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.748 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.748 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.748 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.748 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.748 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.748 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.748 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.748 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.748 "name": "raid_bdev1", 00:18:29.748 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:29.748 "strip_size_kb": 0, 00:18:29.748 "state": "online", 00:18:29.748 "raid_level": "raid1", 
00:18:29.748 "superblock": true, 00:18:29.748 "num_base_bdevs": 2, 00:18:29.748 "num_base_bdevs_discovered": 1, 00:18:29.748 "num_base_bdevs_operational": 1, 00:18:29.748 "base_bdevs_list": [ 00:18:29.748 { 00:18:29.748 "name": null, 00:18:29.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.748 "is_configured": false, 00:18:29.748 "data_offset": 0, 00:18:29.748 "data_size": 63488 00:18:29.748 }, 00:18:29.748 { 00:18:29.748 "name": "BaseBdev2", 00:18:29.748 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:29.748 "is_configured": true, 00:18:29.748 "data_offset": 2048, 00:18:29.748 "data_size": 63488 00:18:29.748 } 00:18:29.748 ] 00:18:29.748 }' 00:18:29.748 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.748 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.332 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:30.332 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.332 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.332 [2024-11-27 04:40:17.696393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.332 [2024-11-27 04:40:17.696811] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.332 [2024-11-27 04:40:17.696842] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:30.332 [2024-11-27 04:40:17.696899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.332 [2024-11-27 04:40:17.712847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:18:30.332 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.332 04:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:30.332 [2024-11-27 04:40:17.715371] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:31.269 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.270 "name": "raid_bdev1", 00:18:31.270 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:31.270 "strip_size_kb": 0, 00:18:31.270 "state": "online", 
00:18:31.270 "raid_level": "raid1", 00:18:31.270 "superblock": true, 00:18:31.270 "num_base_bdevs": 2, 00:18:31.270 "num_base_bdevs_discovered": 2, 00:18:31.270 "num_base_bdevs_operational": 2, 00:18:31.270 "process": { 00:18:31.270 "type": "rebuild", 00:18:31.270 "target": "spare", 00:18:31.270 "progress": { 00:18:31.270 "blocks": 20480, 00:18:31.270 "percent": 32 00:18:31.270 } 00:18:31.270 }, 00:18:31.270 "base_bdevs_list": [ 00:18:31.270 { 00:18:31.270 "name": "spare", 00:18:31.270 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:31.270 "is_configured": true, 00:18:31.270 "data_offset": 2048, 00:18:31.270 "data_size": 63488 00:18:31.270 }, 00:18:31.270 { 00:18:31.270 "name": "BaseBdev2", 00:18:31.270 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:31.270 "is_configured": true, 00:18:31.270 "data_offset": 2048, 00:18:31.270 "data_size": 63488 00:18:31.270 } 00:18:31.270 ] 00:18:31.270 }' 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.270 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.270 [2024-11-27 04:40:18.881315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.529 [2024-11-27 04:40:18.924587] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:31.529 [2024-11-27 
04:40:18.924694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.530 [2024-11-27 04:40:18.924724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.530 [2024-11-27 04:40:18.924736] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.530 04:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.530 04:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.530 "name": "raid_bdev1", 00:18:31.530 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:31.530 "strip_size_kb": 0, 00:18:31.530 "state": "online", 00:18:31.530 "raid_level": "raid1", 00:18:31.530 "superblock": true, 00:18:31.530 "num_base_bdevs": 2, 00:18:31.530 "num_base_bdevs_discovered": 1, 00:18:31.530 "num_base_bdevs_operational": 1, 00:18:31.530 "base_bdevs_list": [ 00:18:31.530 { 00:18:31.530 "name": null, 00:18:31.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.530 "is_configured": false, 00:18:31.530 "data_offset": 0, 00:18:31.530 "data_size": 63488 00:18:31.530 }, 00:18:31.530 { 00:18:31.530 "name": "BaseBdev2", 00:18:31.530 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:31.530 "is_configured": true, 00:18:31.530 "data_offset": 2048, 00:18:31.530 "data_size": 63488 00:18:31.530 } 00:18:31.530 ] 00:18:31.530 }' 00:18:31.530 04:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.530 04:40:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:32.107 04:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:32.107 04:40:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.107 04:40:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:32.107 [2024-11-27 04:40:19.455672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:32.107 [2024-11-27 04:40:19.455773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.107 [2024-11-27 04:40:19.455837] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:18:32.107 [2024-11-27 04:40:19.455855] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.107 [2024-11-27 04:40:19.456483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.107 [2024-11-27 04:40:19.456530] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:32.107 [2024-11-27 04:40:19.456660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:32.107 [2024-11-27 04:40:19.456682] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:32.108 [2024-11-27 04:40:19.456701] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:32.108 [2024-11-27 04:40:19.456731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.108 [2024-11-27 04:40:19.473120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:18:32.108 spare 00:18:32.108 04:40:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.108 04:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:32.108 [2024-11-27 04:40:19.475851] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.042 "name": "raid_bdev1", 00:18:33.042 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:33.042 "strip_size_kb": 0, 00:18:33.042 "state": "online", 00:18:33.042 "raid_level": "raid1", 00:18:33.042 "superblock": true, 00:18:33.042 "num_base_bdevs": 2, 00:18:33.042 "num_base_bdevs_discovered": 2, 00:18:33.042 "num_base_bdevs_operational": 2, 00:18:33.042 "process": { 00:18:33.042 "type": "rebuild", 00:18:33.042 "target": "spare", 00:18:33.042 "progress": { 00:18:33.042 "blocks": 20480, 00:18:33.042 "percent": 32 00:18:33.042 } 00:18:33.042 }, 00:18:33.042 "base_bdevs_list": [ 00:18:33.042 { 00:18:33.042 "name": "spare", 00:18:33.042 "uuid": "f8ff6b9e-05af-52f2-9a75-59b7d082ee76", 00:18:33.042 "is_configured": true, 00:18:33.042 "data_offset": 2048, 00:18:33.042 "data_size": 63488 00:18:33.042 }, 00:18:33.042 { 00:18:33.042 "name": "BaseBdev2", 00:18:33.042 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:33.042 "is_configured": true, 00:18:33.042 "data_offset": 2048, 00:18:33.042 "data_size": 63488 00:18:33.042 } 00:18:33.042 ] 00:18:33.042 }' 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.042 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.042 [2024-11-27 04:40:20.633181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.299 [2024-11-27 04:40:20.685129] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:33.299 [2024-11-27 04:40:20.685476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.299 [2024-11-27 04:40:20.685513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.299 [2024-11-27 04:40:20.685535] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.299 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.299 "name": "raid_bdev1", 00:18:33.299 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:33.299 "strip_size_kb": 0, 00:18:33.299 "state": "online", 00:18:33.300 "raid_level": "raid1", 00:18:33.300 "superblock": true, 00:18:33.300 "num_base_bdevs": 2, 00:18:33.300 "num_base_bdevs_discovered": 1, 00:18:33.300 "num_base_bdevs_operational": 1, 00:18:33.300 "base_bdevs_list": [ 00:18:33.300 { 00:18:33.300 "name": null, 00:18:33.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.300 "is_configured": false, 00:18:33.300 "data_offset": 0, 00:18:33.300 "data_size": 63488 00:18:33.300 }, 00:18:33.300 { 00:18:33.300 "name": "BaseBdev2", 00:18:33.300 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:33.300 "is_configured": true, 00:18:33.300 "data_offset": 2048, 00:18:33.300 "data_size": 63488 00:18:33.300 } 00:18:33.300 ] 00:18:33.300 }' 
00:18:33.300 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.300 04:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.864 "name": "raid_bdev1", 00:18:33.864 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:33.864 "strip_size_kb": 0, 00:18:33.864 "state": "online", 00:18:33.864 "raid_level": "raid1", 00:18:33.864 "superblock": true, 00:18:33.864 "num_base_bdevs": 2, 00:18:33.864 "num_base_bdevs_discovered": 1, 00:18:33.864 "num_base_bdevs_operational": 1, 00:18:33.864 "base_bdevs_list": [ 00:18:33.864 { 00:18:33.864 "name": null, 00:18:33.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.864 "is_configured": false, 00:18:33.864 "data_offset": 0, 
00:18:33.864 "data_size": 63488 00:18:33.864 }, 00:18:33.864 { 00:18:33.864 "name": "BaseBdev2", 00:18:33.864 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:33.864 "is_configured": true, 00:18:33.864 "data_offset": 2048, 00:18:33.864 "data_size": 63488 00:18:33.864 } 00:18:33.864 ] 00:18:33.864 }' 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.864 [2024-11-27 04:40:21.420471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:33.864 [2024-11-27 04:40:21.420548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.864 [2024-11-27 04:40:21.420587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:33.864 [2024-11-27 04:40:21.420609] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.864 [2024-11-27 04:40:21.421196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.864 [2024-11-27 04:40:21.421248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:33.864 [2024-11-27 04:40:21.421350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:33.864 [2024-11-27 04:40:21.421379] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:33.864 [2024-11-27 04:40:21.421397] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:33.864 [2024-11-27 04:40:21.421412] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:33.864 BaseBdev1 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.864 04:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.235 "name": "raid_bdev1", 00:18:35.235 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:35.235 "strip_size_kb": 0, 00:18:35.235 "state": "online", 00:18:35.235 "raid_level": "raid1", 00:18:35.235 "superblock": true, 00:18:35.235 "num_base_bdevs": 2, 00:18:35.235 "num_base_bdevs_discovered": 1, 00:18:35.235 "num_base_bdevs_operational": 1, 00:18:35.235 "base_bdevs_list": [ 00:18:35.235 { 00:18:35.235 "name": null, 00:18:35.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.235 "is_configured": false, 00:18:35.235 "data_offset": 0, 00:18:35.235 "data_size": 63488 00:18:35.235 }, 00:18:35.235 { 00:18:35.235 "name": "BaseBdev2", 00:18:35.235 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:35.235 "is_configured": true, 00:18:35.235 "data_offset": 2048, 00:18:35.235 "data_size": 63488 00:18:35.235 } 00:18:35.235 ] 00:18:35.235 }' 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.235 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:18:35.492 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:35.492 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.492 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:35.492 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:35.492 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.493 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.493 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.493 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.493 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.493 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.493 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.493 "name": "raid_bdev1", 00:18:35.493 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:35.493 "strip_size_kb": 0, 00:18:35.493 "state": "online", 00:18:35.493 "raid_level": "raid1", 00:18:35.493 "superblock": true, 00:18:35.493 "num_base_bdevs": 2, 00:18:35.493 "num_base_bdevs_discovered": 1, 00:18:35.493 "num_base_bdevs_operational": 1, 00:18:35.493 "base_bdevs_list": [ 00:18:35.493 { 00:18:35.493 "name": null, 00:18:35.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.493 "is_configured": false, 00:18:35.493 "data_offset": 0, 00:18:35.493 "data_size": 63488 00:18:35.493 }, 00:18:35.493 { 00:18:35.493 "name": "BaseBdev2", 00:18:35.493 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:35.493 "is_configured": true, 
00:18:35.493 "data_offset": 2048, 00:18:35.493 "data_size": 63488 00:18:35.493 } 00:18:35.493 ] 00:18:35.493 }' 00:18:35.493 04:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.493 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.493 [2024-11-27 04:40:23.109314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.493 [2024-11-27 04:40:23.109526] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:35.493 [2024-11-27 04:40:23.109547] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:35.751 request: 00:18:35.751 { 00:18:35.751 "base_bdev": "BaseBdev1", 00:18:35.751 "raid_bdev": "raid_bdev1", 00:18:35.751 "method": "bdev_raid_add_base_bdev", 00:18:35.751 "req_id": 1 00:18:35.751 } 00:18:35.751 Got JSON-RPC error response 00:18:35.751 response: 00:18:35.751 { 00:18:35.751 "code": -22, 00:18:35.751 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:35.751 } 00:18:35.751 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:35.751 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:35.751 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:35.751 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:35.751 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:35.751 04:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.686 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.686 "name": "raid_bdev1", 00:18:36.686 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:36.686 "strip_size_kb": 0, 00:18:36.686 "state": "online", 00:18:36.687 "raid_level": "raid1", 00:18:36.687 "superblock": true, 00:18:36.687 "num_base_bdevs": 2, 00:18:36.687 "num_base_bdevs_discovered": 1, 00:18:36.687 "num_base_bdevs_operational": 1, 00:18:36.687 "base_bdevs_list": [ 00:18:36.687 { 00:18:36.687 "name": null, 00:18:36.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.687 "is_configured": false, 00:18:36.687 "data_offset": 0, 00:18:36.687 "data_size": 63488 00:18:36.687 }, 00:18:36.687 { 00:18:36.687 "name": "BaseBdev2", 00:18:36.687 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:36.687 "is_configured": true, 00:18:36.687 "data_offset": 2048, 00:18:36.687 "data_size": 63488 00:18:36.687 } 00:18:36.687 ] 00:18:36.687 }' 
00:18:36.687 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.687 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.255 "name": "raid_bdev1", 00:18:37.255 "uuid": "ed1b168f-5797-4871-97c6-d9069c7657ee", 00:18:37.255 "strip_size_kb": 0, 00:18:37.255 "state": "online", 00:18:37.255 "raid_level": "raid1", 00:18:37.255 "superblock": true, 00:18:37.255 "num_base_bdevs": 2, 00:18:37.255 "num_base_bdevs_discovered": 1, 00:18:37.255 "num_base_bdevs_operational": 1, 00:18:37.255 "base_bdevs_list": [ 00:18:37.255 { 00:18:37.255 "name": null, 00:18:37.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.255 "is_configured": false, 00:18:37.255 "data_offset": 0, 
00:18:37.255 "data_size": 63488 00:18:37.255 }, 00:18:37.255 { 00:18:37.255 "name": "BaseBdev2", 00:18:37.255 "uuid": "a16fe677-914b-55f9-8e03-ff79e766319f", 00:18:37.255 "is_configured": true, 00:18:37.255 "data_offset": 2048, 00:18:37.255 "data_size": 63488 00:18:37.255 } 00:18:37.255 ] 00:18:37.255 }' 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77210 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77210 ']' 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77210 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77210 00:18:37.255 killing process with pid 77210 00:18:37.255 Received shutdown signal, test time was about 19.272497 seconds 00:18:37.255 00:18:37.255 Latency(us) 00:18:37.255 [2024-11-27T04:40:24.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.255 [2024-11-27T04:40:24.878Z] =================================================================================================================== 00:18:37.255 [2024-11-27T04:40:24.878Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77210' 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77210 00:18:37.255 [2024-11-27 04:40:24.806851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.255 04:40:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77210 00:18:37.255 [2024-11-27 04:40:24.807028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.255 [2024-11-27 04:40:24.807113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.255 [2024-11-27 04:40:24.807131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:37.514 [2024-11-27 04:40:25.013301] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.947 ************************************ 00:18:38.947 END TEST raid_rebuild_test_sb_io 00:18:38.947 ************************************ 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:38.947 00:18:38.947 real 0m22.538s 00:18:38.947 user 0m30.287s 00:18:38.947 sys 0m1.964s 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.947 04:40:26 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:18:38.947 04:40:26 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:18:38.947 04:40:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:18:38.947 04:40:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.947 04:40:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.947 ************************************ 00:18:38.947 START TEST raid_rebuild_test 00:18:38.947 ************************************ 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77930 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:38.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77930 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77930 ']' 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.947 04:40:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.948 04:40:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.948 04:40:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.948 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:38.948 Zero copy mechanism will not be used. 00:18:38.948 [2024-11-27 04:40:26.311124] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:18:38.948 [2024-11-27 04:40:26.311338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77930 ] 00:18:38.948 [2024-11-27 04:40:26.510050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.206 [2024-11-27 04:40:26.667741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.466 [2024-11-27 04:40:26.886875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.466 [2024-11-27 04:40:26.886931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.725 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.725 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:39.725 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.725 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:39.725 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.725 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.984 BaseBdev1_malloc 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.984 [2024-11-27 04:40:27.396489] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:39.984 
[2024-11-27 04:40:27.396568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.984 [2024-11-27 04:40:27.396601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:39.984 [2024-11-27 04:40:27.396621] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.984 [2024-11-27 04:40:27.399438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.984 [2024-11-27 04:40:27.399491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:39.984 BaseBdev1 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.984 BaseBdev2_malloc 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.984 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.984 [2024-11-27 04:40:27.453735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:39.984 [2024-11-27 04:40:27.453834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.984 [2024-11-27 04:40:27.453867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:18:39.984 [2024-11-27 04:40:27.453885] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.985 [2024-11-27 04:40:27.456681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.985 [2024-11-27 04:40:27.456731] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:39.985 BaseBdev2 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.985 BaseBdev3_malloc 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.985 [2024-11-27 04:40:27.521125] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:39.985 [2024-11-27 04:40:27.521223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.985 [2024-11-27 04:40:27.521254] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:39.985 [2024-11-27 04:40:27.521272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.985 [2024-11-27 04:40:27.524536] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:18:39.985 [2024-11-27 04:40:27.524587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:39.985 BaseBdev3 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.985 BaseBdev4_malloc 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.985 [2024-11-27 04:40:27.578019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:39.985 [2024-11-27 04:40:27.578096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.985 [2024-11-27 04:40:27.578127] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:39.985 [2024-11-27 04:40:27.578147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.985 [2024-11-27 04:40:27.580948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.985 [2024-11-27 04:40:27.581015] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:39.985 BaseBdev4 00:18:39.985 04:40:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.985 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.244 spare_malloc 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.244 spare_delay 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.244 [2024-11-27 04:40:27.639027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:40.244 [2024-11-27 04:40:27.639123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.244 [2024-11-27 04:40:27.639149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:40.244 [2024-11-27 04:40:27.639166] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.244 [2024-11-27 04:40:27.642037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.244 [2024-11-27 04:40:27.642087] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:40.244 spare 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.244 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.245 [2024-11-27 04:40:27.651046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.245 [2024-11-27 04:40:27.653451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:40.245 [2024-11-27 04:40:27.653672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:40.245 [2024-11-27 04:40:27.653795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:40.245 [2024-11-27 04:40:27.653914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:40.245 [2024-11-27 04:40:27.653938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:40.245 [2024-11-27 04:40:27.654278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:40.245 [2024-11-27 04:40:27.654498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:40.245 [2024-11-27 04:40:27.654518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:40.245 [2024-11-27 04:40:27.654738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.245 04:40:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.245 "name": "raid_bdev1", 00:18:40.245 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:40.245 "strip_size_kb": 0, 00:18:40.245 "state": "online", 00:18:40.245 "raid_level": "raid1", 00:18:40.245 "superblock": false, 00:18:40.245 "num_base_bdevs": 4, 00:18:40.245 "num_base_bdevs_discovered": 4, 
00:18:40.245 "num_base_bdevs_operational": 4, 00:18:40.245 "base_bdevs_list": [ 00:18:40.245 { 00:18:40.245 "name": "BaseBdev1", 00:18:40.245 "uuid": "42717ac8-420b-5f20-ae99-dc27e1d762bd", 00:18:40.245 "is_configured": true, 00:18:40.245 "data_offset": 0, 00:18:40.245 "data_size": 65536 00:18:40.245 }, 00:18:40.245 { 00:18:40.245 "name": "BaseBdev2", 00:18:40.245 "uuid": "4ff983e2-a385-5153-98b3-cb20aa8443ee", 00:18:40.245 "is_configured": true, 00:18:40.245 "data_offset": 0, 00:18:40.245 "data_size": 65536 00:18:40.245 }, 00:18:40.245 { 00:18:40.245 "name": "BaseBdev3", 00:18:40.245 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:40.245 "is_configured": true, 00:18:40.245 "data_offset": 0, 00:18:40.245 "data_size": 65536 00:18:40.245 }, 00:18:40.245 { 00:18:40.245 "name": "BaseBdev4", 00:18:40.245 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:40.245 "is_configured": true, 00:18:40.245 "data_offset": 0, 00:18:40.245 "data_size": 65536 00:18:40.245 } 00:18:40.245 ] 00:18:40.245 }' 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.245 04:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.811 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:40.811 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.811 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.811 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.811 [2024-11-27 04:40:28.167681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.811 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.811 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:40.811 04:40:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.811 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.812 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:41.070 
[2024-11-27 04:40:28.567362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:41.070 /dev/nbd0 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:41.070 1+0 records in 00:18:41.070 1+0 records out 00:18:41.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368231 s, 11.1 MB/s 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:41.070 04:40:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:41.070 04:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:51.046 65536+0 records in 00:18:51.046 65536+0 records out 00:18:51.046 33554432 bytes (34 MB, 32 MiB) copied, 8.79351 s, 3.8 MB/s 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:51.046 [2024-11-27 04:40:37.728328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:51.046 04:40:37 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.046 [2024-11-27 04:40:37.740439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.046 04:40:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.046 "name": "raid_bdev1", 00:18:51.046 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:51.046 "strip_size_kb": 0, 00:18:51.046 "state": "online", 00:18:51.046 "raid_level": "raid1", 00:18:51.046 "superblock": false, 00:18:51.046 "num_base_bdevs": 4, 00:18:51.046 "num_base_bdevs_discovered": 3, 00:18:51.046 "num_base_bdevs_operational": 3, 00:18:51.046 "base_bdevs_list": [ 00:18:51.046 { 00:18:51.046 "name": null, 00:18:51.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.046 "is_configured": false, 00:18:51.046 "data_offset": 0, 00:18:51.046 "data_size": 65536 00:18:51.046 }, 00:18:51.046 { 00:18:51.046 "name": "BaseBdev2", 00:18:51.046 "uuid": "4ff983e2-a385-5153-98b3-cb20aa8443ee", 00:18:51.046 "is_configured": true, 00:18:51.046 "data_offset": 0, 00:18:51.046 "data_size": 65536 00:18:51.046 }, 00:18:51.046 { 00:18:51.046 "name": "BaseBdev3", 00:18:51.046 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:51.046 "is_configured": true, 00:18:51.046 "data_offset": 0, 00:18:51.046 "data_size": 65536 00:18:51.046 }, 00:18:51.046 { 00:18:51.046 "name": "BaseBdev4", 00:18:51.046 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:51.046 "is_configured": true, 00:18:51.046 "data_offset": 0, 00:18:51.046 "data_size": 65536 00:18:51.046 } 00:18:51.046 ] 
00:18:51.046 }' 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.046 04:40:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.046 04:40:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:51.046 04:40:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.046 04:40:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.046 [2024-11-27 04:40:38.236598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.046 [2024-11-27 04:40:38.252055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:18:51.046 04:40:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.046 04:40:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:51.046 [2024-11-27 04:40:38.254535] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.982 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.983 "name": "raid_bdev1", 00:18:51.983 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:51.983 "strip_size_kb": 0, 00:18:51.983 "state": "online", 00:18:51.983 "raid_level": "raid1", 00:18:51.983 "superblock": false, 00:18:51.983 "num_base_bdevs": 4, 00:18:51.983 "num_base_bdevs_discovered": 4, 00:18:51.983 "num_base_bdevs_operational": 4, 00:18:51.983 "process": { 00:18:51.983 "type": "rebuild", 00:18:51.983 "target": "spare", 00:18:51.983 "progress": { 00:18:51.983 "blocks": 20480, 00:18:51.983 "percent": 31 00:18:51.983 } 00:18:51.983 }, 00:18:51.983 "base_bdevs_list": [ 00:18:51.983 { 00:18:51.983 "name": "spare", 00:18:51.983 "uuid": "f44af28e-d3f1-5638-8e76-77bf5eebdd15", 00:18:51.983 "is_configured": true, 00:18:51.983 "data_offset": 0, 00:18:51.983 "data_size": 65536 00:18:51.983 }, 00:18:51.983 { 00:18:51.983 "name": "BaseBdev2", 00:18:51.983 "uuid": "4ff983e2-a385-5153-98b3-cb20aa8443ee", 00:18:51.983 "is_configured": true, 00:18:51.983 "data_offset": 0, 00:18:51.983 "data_size": 65536 00:18:51.983 }, 00:18:51.983 { 00:18:51.983 "name": "BaseBdev3", 00:18:51.983 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:51.983 "is_configured": true, 00:18:51.983 "data_offset": 0, 00:18:51.983 "data_size": 65536 00:18:51.983 }, 00:18:51.983 { 00:18:51.983 "name": "BaseBdev4", 00:18:51.983 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:51.983 "is_configured": true, 00:18:51.983 "data_offset": 0, 00:18:51.983 "data_size": 65536 00:18:51.983 } 00:18:51.983 ] 00:18:51.983 }' 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.983 [2024-11-27 04:40:39.407697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.983 [2024-11-27 04:40:39.463580] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:51.983 [2024-11-27 04:40:39.463712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.983 [2024-11-27 04:40:39.463749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.983 [2024-11-27 04:40:39.463787] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.983 "name": "raid_bdev1", 00:18:51.983 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:51.983 "strip_size_kb": 0, 00:18:51.983 "state": "online", 00:18:51.983 "raid_level": "raid1", 00:18:51.983 "superblock": false, 00:18:51.983 "num_base_bdevs": 4, 00:18:51.983 "num_base_bdevs_discovered": 3, 00:18:51.983 "num_base_bdevs_operational": 3, 00:18:51.983 "base_bdevs_list": [ 00:18:51.983 { 00:18:51.983 "name": null, 00:18:51.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.983 "is_configured": false, 00:18:51.983 "data_offset": 0, 00:18:51.983 "data_size": 65536 00:18:51.983 }, 00:18:51.983 { 00:18:51.983 "name": "BaseBdev2", 00:18:51.983 "uuid": "4ff983e2-a385-5153-98b3-cb20aa8443ee", 00:18:51.983 "is_configured": true, 00:18:51.983 "data_offset": 0, 00:18:51.983 "data_size": 65536 00:18:51.983 }, 00:18:51.983 { 00:18:51.983 "name": "BaseBdev3", 00:18:51.983 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:51.983 
"is_configured": true, 00:18:51.983 "data_offset": 0, 00:18:51.983 "data_size": 65536 00:18:51.983 }, 00:18:51.983 { 00:18:51.983 "name": "BaseBdev4", 00:18:51.983 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:51.983 "is_configured": true, 00:18:51.983 "data_offset": 0, 00:18:51.983 "data_size": 65536 00:18:51.983 } 00:18:51.983 ] 00:18:51.983 }' 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.983 04:40:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.551 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.551 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.551 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.551 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.551 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.551 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.551 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.551 04:40:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.551 04:40:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.551 04:40:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.552 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.552 "name": "raid_bdev1", 00:18:52.552 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:52.552 "strip_size_kb": 0, 00:18:52.552 "state": "online", 00:18:52.552 "raid_level": "raid1", 00:18:52.552 "superblock": false, 00:18:52.552 "num_base_bdevs": 4, 00:18:52.552 
"num_base_bdevs_discovered": 3, 00:18:52.552 "num_base_bdevs_operational": 3, 00:18:52.552 "base_bdevs_list": [ 00:18:52.552 { 00:18:52.552 "name": null, 00:18:52.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.552 "is_configured": false, 00:18:52.552 "data_offset": 0, 00:18:52.552 "data_size": 65536 00:18:52.552 }, 00:18:52.552 { 00:18:52.552 "name": "BaseBdev2", 00:18:52.552 "uuid": "4ff983e2-a385-5153-98b3-cb20aa8443ee", 00:18:52.552 "is_configured": true, 00:18:52.552 "data_offset": 0, 00:18:52.552 "data_size": 65536 00:18:52.552 }, 00:18:52.552 { 00:18:52.552 "name": "BaseBdev3", 00:18:52.552 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:52.552 "is_configured": true, 00:18:52.552 "data_offset": 0, 00:18:52.552 "data_size": 65536 00:18:52.552 }, 00:18:52.552 { 00:18:52.552 "name": "BaseBdev4", 00:18:52.552 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:52.552 "is_configured": true, 00:18:52.552 "data_offset": 0, 00:18:52.552 "data_size": 65536 00:18:52.552 } 00:18:52.552 ] 00:18:52.552 }' 00:18:52.552 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.552 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.552 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.552 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.552 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:52.552 04:40:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.552 04:40:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.552 [2024-11-27 04:40:40.147849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:52.552 [2024-11-27 04:40:40.161507] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:18:52.552 04:40:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.552 04:40:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:52.552 [2024-11-27 04:40:40.164026] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.963 "name": "raid_bdev1", 00:18:53.963 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:53.963 "strip_size_kb": 0, 00:18:53.963 "state": "online", 00:18:53.963 "raid_level": "raid1", 00:18:53.963 "superblock": false, 00:18:53.963 "num_base_bdevs": 4, 00:18:53.963 "num_base_bdevs_discovered": 4, 00:18:53.963 "num_base_bdevs_operational": 4, 00:18:53.963 "process": { 00:18:53.963 "type": "rebuild", 00:18:53.963 "target": 
"spare", 00:18:53.963 "progress": { 00:18:53.963 "blocks": 20480, 00:18:53.963 "percent": 31 00:18:53.963 } 00:18:53.963 }, 00:18:53.963 "base_bdevs_list": [ 00:18:53.963 { 00:18:53.963 "name": "spare", 00:18:53.963 "uuid": "f44af28e-d3f1-5638-8e76-77bf5eebdd15", 00:18:53.963 "is_configured": true, 00:18:53.963 "data_offset": 0, 00:18:53.963 "data_size": 65536 00:18:53.963 }, 00:18:53.963 { 00:18:53.963 "name": "BaseBdev2", 00:18:53.963 "uuid": "4ff983e2-a385-5153-98b3-cb20aa8443ee", 00:18:53.963 "is_configured": true, 00:18:53.963 "data_offset": 0, 00:18:53.963 "data_size": 65536 00:18:53.963 }, 00:18:53.963 { 00:18:53.963 "name": "BaseBdev3", 00:18:53.963 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:53.963 "is_configured": true, 00:18:53.963 "data_offset": 0, 00:18:53.963 "data_size": 65536 00:18:53.963 }, 00:18:53.963 { 00:18:53.963 "name": "BaseBdev4", 00:18:53.963 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:53.963 "is_configured": true, 00:18:53.963 "data_offset": 0, 00:18:53.963 "data_size": 65536 00:18:53.963 } 00:18:53.963 ] 00:18:53.963 }' 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.963 [2024-11-27 04:40:41.325144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:53.963 [2024-11-27 04:40:41.373132] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.963 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:53.964 "name": "raid_bdev1", 00:18:53.964 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:53.964 "strip_size_kb": 0, 00:18:53.964 "state": "online", 00:18:53.964 "raid_level": "raid1", 00:18:53.964 "superblock": false, 00:18:53.964 "num_base_bdevs": 4, 00:18:53.964 "num_base_bdevs_discovered": 3, 00:18:53.964 "num_base_bdevs_operational": 3, 00:18:53.964 "process": { 00:18:53.964 "type": "rebuild", 00:18:53.964 "target": "spare", 00:18:53.964 "progress": { 00:18:53.964 "blocks": 24576, 00:18:53.964 "percent": 37 00:18:53.964 } 00:18:53.964 }, 00:18:53.964 "base_bdevs_list": [ 00:18:53.964 { 00:18:53.964 "name": "spare", 00:18:53.964 "uuid": "f44af28e-d3f1-5638-8e76-77bf5eebdd15", 00:18:53.964 "is_configured": true, 00:18:53.964 "data_offset": 0, 00:18:53.964 "data_size": 65536 00:18:53.964 }, 00:18:53.964 { 00:18:53.964 "name": null, 00:18:53.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.964 "is_configured": false, 00:18:53.964 "data_offset": 0, 00:18:53.964 "data_size": 65536 00:18:53.964 }, 00:18:53.964 { 00:18:53.964 "name": "BaseBdev3", 00:18:53.964 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:53.964 "is_configured": true, 00:18:53.964 "data_offset": 0, 00:18:53.964 "data_size": 65536 00:18:53.964 }, 00:18:53.964 { 00:18:53.964 "name": "BaseBdev4", 00:18:53.964 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:53.964 "is_configured": true, 00:18:53.964 "data_offset": 0, 00:18:53.964 "data_size": 65536 00:18:53.964 } 00:18:53.964 ] 00:18:53.964 }' 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.964 04:40:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=482 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.964 "name": "raid_bdev1", 00:18:53.964 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:53.964 "strip_size_kb": 0, 00:18:53.964 "state": "online", 00:18:53.964 "raid_level": "raid1", 00:18:53.964 "superblock": false, 00:18:53.964 "num_base_bdevs": 4, 00:18:53.964 "num_base_bdevs_discovered": 3, 00:18:53.964 "num_base_bdevs_operational": 3, 00:18:53.964 "process": { 00:18:53.964 "type": "rebuild", 00:18:53.964 "target": "spare", 00:18:53.964 "progress": { 00:18:53.964 "blocks": 26624, 00:18:53.964 "percent": 40 00:18:53.964 } 00:18:53.964 }, 00:18:53.964 "base_bdevs_list": [ 00:18:53.964 { 00:18:53.964 "name": 
"spare", 00:18:53.964 "uuid": "f44af28e-d3f1-5638-8e76-77bf5eebdd15", 00:18:53.964 "is_configured": true, 00:18:53.964 "data_offset": 0, 00:18:53.964 "data_size": 65536 00:18:53.964 }, 00:18:53.964 { 00:18:53.964 "name": null, 00:18:53.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.964 "is_configured": false, 00:18:53.964 "data_offset": 0, 00:18:53.964 "data_size": 65536 00:18:53.964 }, 00:18:53.964 { 00:18:53.964 "name": "BaseBdev3", 00:18:53.964 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:53.964 "is_configured": true, 00:18:53.964 "data_offset": 0, 00:18:53.964 "data_size": 65536 00:18:53.964 }, 00:18:53.964 { 00:18:53.964 "name": "BaseBdev4", 00:18:53.964 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:53.964 "is_configured": true, 00:18:53.964 "data_offset": 0, 00:18:53.964 "data_size": 65536 00:18:53.964 } 00:18:53.964 ] 00:18:53.964 }' 00:18:53.964 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.222 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.222 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.222 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.222 04:40:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.157 04:40:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.157 "name": "raid_bdev1", 00:18:55.157 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:55.157 "strip_size_kb": 0, 00:18:55.157 "state": "online", 00:18:55.157 "raid_level": "raid1", 00:18:55.157 "superblock": false, 00:18:55.157 "num_base_bdevs": 4, 00:18:55.157 "num_base_bdevs_discovered": 3, 00:18:55.157 "num_base_bdevs_operational": 3, 00:18:55.157 "process": { 00:18:55.157 "type": "rebuild", 00:18:55.157 "target": "spare", 00:18:55.157 "progress": { 00:18:55.157 "blocks": 51200, 00:18:55.157 "percent": 78 00:18:55.157 } 00:18:55.157 }, 00:18:55.157 "base_bdevs_list": [ 00:18:55.157 { 00:18:55.157 "name": "spare", 00:18:55.157 "uuid": "f44af28e-d3f1-5638-8e76-77bf5eebdd15", 00:18:55.157 "is_configured": true, 00:18:55.157 "data_offset": 0, 00:18:55.157 "data_size": 65536 00:18:55.157 }, 00:18:55.157 { 00:18:55.157 "name": null, 00:18:55.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.157 "is_configured": false, 00:18:55.157 "data_offset": 0, 00:18:55.157 "data_size": 65536 00:18:55.157 }, 00:18:55.157 { 00:18:55.157 "name": "BaseBdev3", 00:18:55.157 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:55.157 "is_configured": true, 00:18:55.157 "data_offset": 0, 00:18:55.157 "data_size": 65536 00:18:55.157 }, 00:18:55.157 { 00:18:55.157 
"name": "BaseBdev4", 00:18:55.157 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:55.157 "is_configured": true, 00:18:55.157 "data_offset": 0, 00:18:55.157 "data_size": 65536 00:18:55.157 } 00:18:55.157 ] 00:18:55.157 }' 00:18:55.157 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.416 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.416 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.416 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.416 04:40:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:55.983 [2024-11-27 04:40:43.388529] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:55.983 [2024-11-27 04:40:43.388645] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:55.983 [2024-11-27 04:40:43.388714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.242 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:56.242 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.242 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.242 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.242 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.242 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.242 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.242 04:40:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:56.242 04:40:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.242 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.501 04:40:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.501 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.501 "name": "raid_bdev1", 00:18:56.501 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:56.501 "strip_size_kb": 0, 00:18:56.501 "state": "online", 00:18:56.501 "raid_level": "raid1", 00:18:56.501 "superblock": false, 00:18:56.501 "num_base_bdevs": 4, 00:18:56.501 "num_base_bdevs_discovered": 3, 00:18:56.501 "num_base_bdevs_operational": 3, 00:18:56.501 "base_bdevs_list": [ 00:18:56.501 { 00:18:56.501 "name": "spare", 00:18:56.501 "uuid": "f44af28e-d3f1-5638-8e76-77bf5eebdd15", 00:18:56.501 "is_configured": true, 00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 }, 00:18:56.501 { 00:18:56.501 "name": null, 00:18:56.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.501 "is_configured": false, 00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 }, 00:18:56.501 { 00:18:56.501 "name": "BaseBdev3", 00:18:56.501 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:56.501 "is_configured": true, 00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 }, 00:18:56.501 { 00:18:56.501 "name": "BaseBdev4", 00:18:56.501 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:56.501 "is_configured": true, 00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 } 00:18:56.501 ] 00:18:56.501 }' 00:18:56.501 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.501 04:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:56.501 04:40:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.501 "name": "raid_bdev1", 00:18:56.501 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:56.501 "strip_size_kb": 0, 00:18:56.501 "state": "online", 00:18:56.501 "raid_level": "raid1", 00:18:56.501 "superblock": false, 00:18:56.501 "num_base_bdevs": 4, 00:18:56.501 "num_base_bdevs_discovered": 3, 00:18:56.501 "num_base_bdevs_operational": 3, 00:18:56.501 "base_bdevs_list": [ 00:18:56.501 { 00:18:56.501 "name": "spare", 00:18:56.501 "uuid": "f44af28e-d3f1-5638-8e76-77bf5eebdd15", 00:18:56.501 "is_configured": true, 
00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 }, 00:18:56.501 { 00:18:56.501 "name": null, 00:18:56.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.501 "is_configured": false, 00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 }, 00:18:56.501 { 00:18:56.501 "name": "BaseBdev3", 00:18:56.501 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:56.501 "is_configured": true, 00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 }, 00:18:56.501 { 00:18:56.501 "name": "BaseBdev4", 00:18:56.501 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:56.501 "is_configured": true, 00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 } 00:18:56.501 ] 00:18:56.501 }' 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:56.501 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.760 
04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.760 "name": "raid_bdev1", 00:18:56.760 "uuid": "691c3f0c-9a0a-4cb6-b66f-c514cd015adb", 00:18:56.760 "strip_size_kb": 0, 00:18:56.760 "state": "online", 00:18:56.760 "raid_level": "raid1", 00:18:56.760 "superblock": false, 00:18:56.760 "num_base_bdevs": 4, 00:18:56.760 "num_base_bdevs_discovered": 3, 00:18:56.760 "num_base_bdevs_operational": 3, 00:18:56.760 "base_bdevs_list": [ 00:18:56.760 { 00:18:56.760 "name": "spare", 00:18:56.760 "uuid": "f44af28e-d3f1-5638-8e76-77bf5eebdd15", 00:18:56.760 "is_configured": true, 00:18:56.760 "data_offset": 0, 00:18:56.760 "data_size": 65536 00:18:56.760 }, 00:18:56.760 { 00:18:56.760 "name": null, 00:18:56.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.760 "is_configured": false, 00:18:56.760 "data_offset": 0, 00:18:56.760 "data_size": 65536 00:18:56.760 }, 00:18:56.760 { 00:18:56.760 "name": "BaseBdev3", 00:18:56.760 "uuid": "a6f3d9a6-72b9-5d77-bf5c-a3882efe05eb", 00:18:56.760 "is_configured": true, 00:18:56.760 "data_offset": 0, 00:18:56.760 "data_size": 65536 00:18:56.760 }, 00:18:56.760 { 
00:18:56.760 "name": "BaseBdev4", 00:18:56.760 "uuid": "929bb0ce-0b49-5164-a205-77142f3665b1", 00:18:56.760 "is_configured": true, 00:18:56.760 "data_offset": 0, 00:18:56.760 "data_size": 65536 00:18:56.760 } 00:18:56.760 ] 00:18:56.760 }' 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.760 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.020 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:57.020 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.020 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.020 [2024-11-27 04:40:44.632639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.020 [2024-11-27 04:40:44.632689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.020 [2024-11-27 04:40:44.632838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.020 [2024-11-27 04:40:44.632953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.020 [2024-11-27 04:40:44.632971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:57.020 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.020 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.020 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.020 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.020 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:57.287 04:40:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:57.552 /dev/nbd0 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:57.552 04:40:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:57.552 1+0 records in 00:18:57.552 1+0 records out 00:18:57.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310533 s, 13.2 MB/s 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:57.552 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:57.809 /dev/nbd1 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:57.809 
04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:57.809 1+0 records in 00:18:57.809 1+0 records out 00:18:57.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436241 s, 9.4 MB/s 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:57.809 04:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 
/dev/nbd1 00:18:58.067 04:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:58.067 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:58.067 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:58.067 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:58.067 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:58.067 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:58.067 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:58.325 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:58.325 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:58.325 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:58.325 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.325 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.325 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:58.325 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:58.325 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.325 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:58.325 04:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:58.588 
04:40:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77930 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77930 ']' 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77930 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77930 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.588 killing process with pid 77930 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77930' 00:18:58.588 Received shutdown signal, test time was about 60.000000 seconds 00:18:58.588 00:18:58.588 Latency(us) 00:18:58.588 [2024-11-27T04:40:46.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:58.588 [2024-11-27T04:40:46.211Z] =================================================================================================================== 00:18:58.588 [2024-11-27T04:40:46.211Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77930 00:18:58.588 04:40:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77930 00:18:58.588 [2024-11-27 04:40:46.158651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.154 [2024-11-27 04:40:46.598531] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:00.090 04:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:00.090 00:19:00.090 real 0m21.465s 00:19:00.090 user 0m24.274s 00:19:00.090 sys 0m3.725s 00:19:00.090 04:40:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.090 ************************************ 00:19:00.090 END TEST raid_rebuild_test 00:19:00.090 ************************************ 00:19:00.090 04:40:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.090 04:40:47 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:19:00.090 04:40:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:00.090 04:40:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.090 04:40:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:00.348 ************************************ 00:19:00.348 START TEST raid_rebuild_test_sb 00:19:00.348 ************************************ 00:19:00.348 04:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:19:00.348 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:00.348 04:40:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:00.348 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78412 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78412 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78412 ']' 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.349 04:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.349 [2024-11-27 04:40:47.829261] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:19:00.349 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:00.349 Zero copy mechanism will not be used. 00:19:00.349 [2024-11-27 04:40:47.829445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78412 ] 00:19:00.607 [2024-11-27 04:40:48.019345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.607 [2024-11-27 04:40:48.180980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.866 [2024-11-27 04:40:48.400803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.866 [2024-11-27 04:40:48.400888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.432 04:40:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.432 BaseBdev1_malloc 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.432 [2024-11-27 04:40:48.893864] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:01.432 [2024-11-27 04:40:48.893942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.432 [2024-11-27 04:40:48.893974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:01.432 [2024-11-27 04:40:48.893993] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.432 [2024-11-27 04:40:48.896761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.432 [2024-11-27 04:40:48.896823] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:01.432 BaseBdev1 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.432 BaseBdev2_malloc 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.432 [2024-11-27 04:40:48.942307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:01.432 [2024-11-27 04:40:48.942384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.432 [2024-11-27 04:40:48.942417] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:01.432 [2024-11-27 04:40:48.942435] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.432 [2024-11-27 04:40:48.945211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.432 [2024-11-27 04:40:48.945257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:01.432 BaseBdev2 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.432 BaseBdev3_malloc 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:01.432 04:40:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.432 04:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.432 [2024-11-27 04:40:49.001856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:01.432 [2024-11-27 04:40:49.001925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.432 [2024-11-27 04:40:49.001958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:01.433 [2024-11-27 04:40:49.001977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.433 [2024-11-27 04:40:49.004725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.433 [2024-11-27 04:40:49.004788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:01.433 BaseBdev3 00:19:01.433 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.433 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:01.433 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:01.433 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.433 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.433 BaseBdev4_malloc 00:19:01.433 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.433 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:01.433 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.433 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.433 
[2024-11-27 04:40:49.050513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:01.433 [2024-11-27 04:40:49.050607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.433 [2024-11-27 04:40:49.050638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:01.433 [2024-11-27 04:40:49.050656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.692 [2024-11-27 04:40:49.053407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.692 [2024-11-27 04:40:49.053465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:01.692 BaseBdev4 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.692 spare_malloc 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.692 spare_delay 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:01.692 04:40:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.692 [2024-11-27 04:40:49.115464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:01.692 [2024-11-27 04:40:49.115541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.692 [2024-11-27 04:40:49.115574] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:01.692 [2024-11-27 04:40:49.115593] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.692 [2024-11-27 04:40:49.118479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.692 [2024-11-27 04:40:49.118527] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:01.692 spare 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.692 [2024-11-27 04:40:49.123523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.692 [2024-11-27 04:40:49.126002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.692 [2024-11-27 04:40:49.126111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:01.692 [2024-11-27 04:40:49.126219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:01.692 [2024-11-27 04:40:49.126476] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:01.692 [2024-11-27 04:40:49.126513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:01.692 [2024-11-27 04:40:49.126852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:01.692 [2024-11-27 04:40:49.127100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:01.692 [2024-11-27 04:40:49.127129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:01.692 [2024-11-27 04:40:49.127322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.692 "name": "raid_bdev1", 00:19:01.692 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:01.692 "strip_size_kb": 0, 00:19:01.692 "state": "online", 00:19:01.692 "raid_level": "raid1", 00:19:01.692 "superblock": true, 00:19:01.692 "num_base_bdevs": 4, 00:19:01.692 "num_base_bdevs_discovered": 4, 00:19:01.692 "num_base_bdevs_operational": 4, 00:19:01.692 "base_bdevs_list": [ 00:19:01.692 { 00:19:01.692 "name": "BaseBdev1", 00:19:01.692 "uuid": "6a05df92-5578-5ec6-bc78-6d1b2c08b748", 00:19:01.692 "is_configured": true, 00:19:01.692 "data_offset": 2048, 00:19:01.692 "data_size": 63488 00:19:01.692 }, 00:19:01.692 { 00:19:01.692 "name": "BaseBdev2", 00:19:01.692 "uuid": "8974e037-efb2-5716-a454-72a682a07584", 00:19:01.692 "is_configured": true, 00:19:01.692 "data_offset": 2048, 00:19:01.692 "data_size": 63488 00:19:01.692 }, 00:19:01.692 { 00:19:01.692 "name": "BaseBdev3", 00:19:01.692 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:01.692 "is_configured": true, 00:19:01.692 "data_offset": 2048, 00:19:01.692 "data_size": 63488 00:19:01.692 }, 00:19:01.692 { 00:19:01.692 "name": "BaseBdev4", 00:19:01.692 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:01.692 "is_configured": true, 00:19:01.692 "data_offset": 2048, 00:19:01.692 "data_size": 63488 00:19:01.692 } 00:19:01.692 ] 00:19:01.692 }' 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.692 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:02.258 [2024-11-27 04:40:49.632113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:02.258 04:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:02.517 [2024-11-27 04:40:50.023864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:02.517 /dev/nbd0 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:02.517 
04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:02.517 1+0 records in 00:19:02.517 1+0 records out 00:19:02.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389009 s, 10.5 MB/s 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:02.517 04:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:12.489 63488+0 records in 00:19:12.489 63488+0 records out 00:19:12.489 32505856 bytes (33 MB, 31 MiB) copied, 8.55317 s, 3.8 MB/s 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:12.489 [2024-11-27 04:40:58.905341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.489 [2024-11-27 04:40:58.917601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:12.489 
04:40:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.489 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.489 "name": "raid_bdev1", 00:19:12.489 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:12.489 "strip_size_kb": 0, 00:19:12.489 "state": 
"online", 00:19:12.489 "raid_level": "raid1", 00:19:12.489 "superblock": true, 00:19:12.489 "num_base_bdevs": 4, 00:19:12.489 "num_base_bdevs_discovered": 3, 00:19:12.489 "num_base_bdevs_operational": 3, 00:19:12.489 "base_bdevs_list": [ 00:19:12.489 { 00:19:12.489 "name": null, 00:19:12.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.489 "is_configured": false, 00:19:12.489 "data_offset": 0, 00:19:12.489 "data_size": 63488 00:19:12.489 }, 00:19:12.489 { 00:19:12.489 "name": "BaseBdev2", 00:19:12.489 "uuid": "8974e037-efb2-5716-a454-72a682a07584", 00:19:12.489 "is_configured": true, 00:19:12.489 "data_offset": 2048, 00:19:12.489 "data_size": 63488 00:19:12.489 }, 00:19:12.489 { 00:19:12.489 "name": "BaseBdev3", 00:19:12.489 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:12.489 "is_configured": true, 00:19:12.489 "data_offset": 2048, 00:19:12.489 "data_size": 63488 00:19:12.489 }, 00:19:12.489 { 00:19:12.489 "name": "BaseBdev4", 00:19:12.489 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:12.489 "is_configured": true, 00:19:12.489 "data_offset": 2048, 00:19:12.489 "data_size": 63488 00:19:12.490 } 00:19:12.490 ] 00:19:12.490 }' 00:19:12.490 04:40:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.490 04:40:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.490 04:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:12.490 04:40:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.490 04:40:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.490 [2024-11-27 04:40:59.433767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.490 [2024-11-27 04:40:59.448164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:19:12.490 04:40:59 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.490 04:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:12.490 [2024-11-27 04:40:59.450691] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.057 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.057 "name": "raid_bdev1", 00:19:13.057 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:13.057 "strip_size_kb": 0, 00:19:13.057 "state": "online", 00:19:13.057 "raid_level": "raid1", 00:19:13.057 "superblock": true, 00:19:13.057 "num_base_bdevs": 4, 00:19:13.057 "num_base_bdevs_discovered": 4, 00:19:13.057 "num_base_bdevs_operational": 4, 00:19:13.057 "process": { 00:19:13.057 "type": "rebuild", 00:19:13.057 "target": "spare", 00:19:13.057 "progress": { 00:19:13.057 "blocks": 20480, 
00:19:13.057 "percent": 32 00:19:13.057 } 00:19:13.057 }, 00:19:13.057 "base_bdevs_list": [ 00:19:13.057 { 00:19:13.057 "name": "spare", 00:19:13.057 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:13.057 "is_configured": true, 00:19:13.058 "data_offset": 2048, 00:19:13.058 "data_size": 63488 00:19:13.058 }, 00:19:13.058 { 00:19:13.058 "name": "BaseBdev2", 00:19:13.058 "uuid": "8974e037-efb2-5716-a454-72a682a07584", 00:19:13.058 "is_configured": true, 00:19:13.058 "data_offset": 2048, 00:19:13.058 "data_size": 63488 00:19:13.058 }, 00:19:13.058 { 00:19:13.058 "name": "BaseBdev3", 00:19:13.058 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:13.058 "is_configured": true, 00:19:13.058 "data_offset": 2048, 00:19:13.058 "data_size": 63488 00:19:13.058 }, 00:19:13.058 { 00:19:13.058 "name": "BaseBdev4", 00:19:13.058 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:13.058 "is_configured": true, 00:19:13.058 "data_offset": 2048, 00:19:13.058 "data_size": 63488 00:19:13.058 } 00:19:13.058 ] 00:19:13.058 }' 00:19:13.058 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.058 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.058 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.058 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.058 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:13.058 04:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.058 04:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.058 [2024-11-27 04:41:00.611758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.058 [2024-11-27 04:41:00.659755] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.058 [2024-11-27 04:41:00.659884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.058 [2024-11-27 04:41:00.659914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.058 [2024-11-27 04:41:00.659931] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.317 "name": "raid_bdev1", 00:19:13.317 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:13.317 "strip_size_kb": 0, 00:19:13.317 "state": "online", 00:19:13.317 "raid_level": "raid1", 00:19:13.317 "superblock": true, 00:19:13.317 "num_base_bdevs": 4, 00:19:13.317 "num_base_bdevs_discovered": 3, 00:19:13.317 "num_base_bdevs_operational": 3, 00:19:13.317 "base_bdevs_list": [ 00:19:13.317 { 00:19:13.317 "name": null, 00:19:13.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.317 "is_configured": false, 00:19:13.317 "data_offset": 0, 00:19:13.317 "data_size": 63488 00:19:13.317 }, 00:19:13.317 { 00:19:13.317 "name": "BaseBdev2", 00:19:13.317 "uuid": "8974e037-efb2-5716-a454-72a682a07584", 00:19:13.317 "is_configured": true, 00:19:13.317 "data_offset": 2048, 00:19:13.317 "data_size": 63488 00:19:13.317 }, 00:19:13.317 { 00:19:13.317 "name": "BaseBdev3", 00:19:13.317 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:13.317 "is_configured": true, 00:19:13.317 "data_offset": 2048, 00:19:13.317 "data_size": 63488 00:19:13.317 }, 00:19:13.317 { 00:19:13.317 "name": "BaseBdev4", 00:19:13.317 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:13.317 "is_configured": true, 00:19:13.317 "data_offset": 2048, 00:19:13.317 "data_size": 63488 00:19:13.317 } 00:19:13.317 ] 00:19:13.317 }' 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.317 04:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.885 
04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.885 "name": "raid_bdev1", 00:19:13.885 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:13.885 "strip_size_kb": 0, 00:19:13.885 "state": "online", 00:19:13.885 "raid_level": "raid1", 00:19:13.885 "superblock": true, 00:19:13.885 "num_base_bdevs": 4, 00:19:13.885 "num_base_bdevs_discovered": 3, 00:19:13.885 "num_base_bdevs_operational": 3, 00:19:13.885 "base_bdevs_list": [ 00:19:13.885 { 00:19:13.885 "name": null, 00:19:13.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.885 "is_configured": false, 00:19:13.885 "data_offset": 0, 00:19:13.885 "data_size": 63488 00:19:13.885 }, 00:19:13.885 { 00:19:13.885 "name": "BaseBdev2", 00:19:13.885 "uuid": "8974e037-efb2-5716-a454-72a682a07584", 00:19:13.885 "is_configured": true, 00:19:13.885 "data_offset": 2048, 00:19:13.885 "data_size": 63488 00:19:13.885 }, 00:19:13.885 { 00:19:13.885 "name": "BaseBdev3", 00:19:13.885 "uuid": 
"f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:13.885 "is_configured": true, 00:19:13.885 "data_offset": 2048, 00:19:13.885 "data_size": 63488 00:19:13.885 }, 00:19:13.885 { 00:19:13.885 "name": "BaseBdev4", 00:19:13.885 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:13.885 "is_configured": true, 00:19:13.885 "data_offset": 2048, 00:19:13.885 "data_size": 63488 00:19:13.885 } 00:19:13.885 ] 00:19:13.885 }' 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.885 [2024-11-27 04:41:01.375863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.885 [2024-11-27 04:41:01.389575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.885 04:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:13.885 [2024-11-27 04:41:01.392389] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:14.818 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.818 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:14.818 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.818 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.818 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.818 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.818 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.818 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.818 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.818 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.121 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.121 "name": "raid_bdev1", 00:19:15.121 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:15.121 "strip_size_kb": 0, 00:19:15.121 "state": "online", 00:19:15.121 "raid_level": "raid1", 00:19:15.121 "superblock": true, 00:19:15.121 "num_base_bdevs": 4, 00:19:15.121 "num_base_bdevs_discovered": 4, 00:19:15.121 "num_base_bdevs_operational": 4, 00:19:15.121 "process": { 00:19:15.122 "type": "rebuild", 00:19:15.122 "target": "spare", 00:19:15.122 "progress": { 00:19:15.122 "blocks": 20480, 00:19:15.122 "percent": 32 00:19:15.122 } 00:19:15.122 }, 00:19:15.122 "base_bdevs_list": [ 00:19:15.122 { 00:19:15.122 "name": "spare", 00:19:15.122 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:15.122 "is_configured": true, 00:19:15.122 "data_offset": 2048, 00:19:15.122 "data_size": 63488 00:19:15.122 }, 00:19:15.122 { 00:19:15.122 "name": "BaseBdev2", 00:19:15.122 "uuid": "8974e037-efb2-5716-a454-72a682a07584", 00:19:15.122 "is_configured": true, 00:19:15.122 "data_offset": 2048, 
00:19:15.122 "data_size": 63488 00:19:15.122 }, 00:19:15.122 { 00:19:15.122 "name": "BaseBdev3", 00:19:15.122 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:15.122 "is_configured": true, 00:19:15.122 "data_offset": 2048, 00:19:15.122 "data_size": 63488 00:19:15.122 }, 00:19:15.122 { 00:19:15.122 "name": "BaseBdev4", 00:19:15.122 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:15.122 "is_configured": true, 00:19:15.122 "data_offset": 2048, 00:19:15.122 "data_size": 63488 00:19:15.122 } 00:19:15.122 ] 00:19:15.122 }' 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:15.122 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.122 [2024-11-27 04:41:02.581838] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:15.122 [2024-11-27 04:41:02.701941] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.122 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.381 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.381 "name": "raid_bdev1", 00:19:15.381 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:15.381 "strip_size_kb": 0, 00:19:15.381 "state": "online", 00:19:15.381 "raid_level": "raid1", 00:19:15.381 "superblock": true, 00:19:15.381 "num_base_bdevs": 4, 
00:19:15.381 "num_base_bdevs_discovered": 3, 00:19:15.381 "num_base_bdevs_operational": 3, 00:19:15.381 "process": { 00:19:15.381 "type": "rebuild", 00:19:15.381 "target": "spare", 00:19:15.381 "progress": { 00:19:15.381 "blocks": 24576, 00:19:15.381 "percent": 38 00:19:15.381 } 00:19:15.381 }, 00:19:15.381 "base_bdevs_list": [ 00:19:15.381 { 00:19:15.381 "name": "spare", 00:19:15.381 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:15.381 "is_configured": true, 00:19:15.381 "data_offset": 2048, 00:19:15.381 "data_size": 63488 00:19:15.381 }, 00:19:15.381 { 00:19:15.381 "name": null, 00:19:15.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.381 "is_configured": false, 00:19:15.381 "data_offset": 0, 00:19:15.381 "data_size": 63488 00:19:15.381 }, 00:19:15.381 { 00:19:15.381 "name": "BaseBdev3", 00:19:15.381 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:15.381 "is_configured": true, 00:19:15.381 "data_offset": 2048, 00:19:15.381 "data_size": 63488 00:19:15.381 }, 00:19:15.381 { 00:19:15.381 "name": "BaseBdev4", 00:19:15.381 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:15.381 "is_configured": true, 00:19:15.381 "data_offset": 2048, 00:19:15.381 "data_size": 63488 00:19:15.381 } 00:19:15.381 ] 00:19:15.381 }' 00:19:15.381 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.381 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.381 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.381 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.381 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=503 00:19:15.381 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.381 04:41:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.381 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.381 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.382 "name": "raid_bdev1", 00:19:15.382 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:15.382 "strip_size_kb": 0, 00:19:15.382 "state": "online", 00:19:15.382 "raid_level": "raid1", 00:19:15.382 "superblock": true, 00:19:15.382 "num_base_bdevs": 4, 00:19:15.382 "num_base_bdevs_discovered": 3, 00:19:15.382 "num_base_bdevs_operational": 3, 00:19:15.382 "process": { 00:19:15.382 "type": "rebuild", 00:19:15.382 "target": "spare", 00:19:15.382 "progress": { 00:19:15.382 "blocks": 26624, 00:19:15.382 "percent": 41 00:19:15.382 } 00:19:15.382 }, 00:19:15.382 "base_bdevs_list": [ 00:19:15.382 { 00:19:15.382 "name": "spare", 00:19:15.382 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:15.382 "is_configured": true, 00:19:15.382 "data_offset": 2048, 00:19:15.382 "data_size": 63488 00:19:15.382 }, 00:19:15.382 { 
00:19:15.382 "name": null, 00:19:15.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.382 "is_configured": false, 00:19:15.382 "data_offset": 0, 00:19:15.382 "data_size": 63488 00:19:15.382 }, 00:19:15.382 { 00:19:15.382 "name": "BaseBdev3", 00:19:15.382 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:15.382 "is_configured": true, 00:19:15.382 "data_offset": 2048, 00:19:15.382 "data_size": 63488 00:19:15.382 }, 00:19:15.382 { 00:19:15.382 "name": "BaseBdev4", 00:19:15.382 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:15.382 "is_configured": true, 00:19:15.382 "data_offset": 2048, 00:19:15.382 "data_size": 63488 00:19:15.382 } 00:19:15.382 ] 00:19:15.382 }' 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.382 04:41:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.640 04:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.640 04:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.575 "name": "raid_bdev1", 00:19:16.575 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:16.575 "strip_size_kb": 0, 00:19:16.575 "state": "online", 00:19:16.575 "raid_level": "raid1", 00:19:16.575 "superblock": true, 00:19:16.575 "num_base_bdevs": 4, 00:19:16.575 "num_base_bdevs_discovered": 3, 00:19:16.575 "num_base_bdevs_operational": 3, 00:19:16.575 "process": { 00:19:16.575 "type": "rebuild", 00:19:16.575 "target": "spare", 00:19:16.575 "progress": { 00:19:16.575 "blocks": 51200, 00:19:16.575 "percent": 80 00:19:16.575 } 00:19:16.575 }, 00:19:16.575 "base_bdevs_list": [ 00:19:16.575 { 00:19:16.575 "name": "spare", 00:19:16.575 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:16.575 "is_configured": true, 00:19:16.575 "data_offset": 2048, 00:19:16.575 "data_size": 63488 00:19:16.575 }, 00:19:16.575 { 00:19:16.575 "name": null, 00:19:16.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.575 "is_configured": false, 00:19:16.575 "data_offset": 0, 00:19:16.575 "data_size": 63488 00:19:16.575 }, 00:19:16.575 { 00:19:16.575 "name": "BaseBdev3", 00:19:16.575 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:16.575 "is_configured": true, 00:19:16.575 "data_offset": 2048, 00:19:16.575 "data_size": 63488 00:19:16.575 }, 00:19:16.575 { 00:19:16.575 "name": "BaseBdev4", 00:19:16.575 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:16.575 "is_configured": true, 00:19:16.575 "data_offset": 
2048, 00:19:16.575 "data_size": 63488 00:19:16.575 } 00:19:16.575 ] 00:19:16.575 }' 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.575 04:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:17.140 [2024-11-27 04:41:04.616732] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:17.140 [2024-11-27 04:41:04.616861] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:17.140 [2024-11-27 04:41:04.617047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.706 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.706 "name": "raid_bdev1", 00:19:17.706 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:17.706 "strip_size_kb": 0, 00:19:17.706 "state": "online", 00:19:17.706 "raid_level": "raid1", 00:19:17.706 "superblock": true, 00:19:17.706 "num_base_bdevs": 4, 00:19:17.706 "num_base_bdevs_discovered": 3, 00:19:17.706 "num_base_bdevs_operational": 3, 00:19:17.706 "base_bdevs_list": [ 00:19:17.706 { 00:19:17.706 "name": "spare", 00:19:17.706 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:17.706 "is_configured": true, 00:19:17.706 "data_offset": 2048, 00:19:17.706 "data_size": 63488 00:19:17.706 }, 00:19:17.706 { 00:19:17.706 "name": null, 00:19:17.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.706 "is_configured": false, 00:19:17.706 "data_offset": 0, 00:19:17.706 "data_size": 63488 00:19:17.706 }, 00:19:17.706 { 00:19:17.706 "name": "BaseBdev3", 00:19:17.706 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:17.706 "is_configured": true, 00:19:17.706 "data_offset": 2048, 00:19:17.706 "data_size": 63488 00:19:17.706 }, 00:19:17.706 { 00:19:17.706 "name": "BaseBdev4", 00:19:17.706 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:17.706 "is_configured": true, 00:19:17.706 "data_offset": 2048, 00:19:17.706 "data_size": 63488 00:19:17.706 } 00:19:17.706 ] 00:19:17.706 }' 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.707 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.964 "name": "raid_bdev1", 00:19:17.964 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:17.964 "strip_size_kb": 0, 00:19:17.964 "state": "online", 00:19:17.964 "raid_level": "raid1", 00:19:17.964 "superblock": true, 00:19:17.964 "num_base_bdevs": 4, 00:19:17.964 "num_base_bdevs_discovered": 3, 00:19:17.964 "num_base_bdevs_operational": 3, 00:19:17.964 "base_bdevs_list": [ 00:19:17.964 { 00:19:17.964 "name": "spare", 00:19:17.964 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:17.964 "is_configured": true, 00:19:17.964 "data_offset": 2048, 
00:19:17.964 "data_size": 63488 00:19:17.964 }, 00:19:17.964 { 00:19:17.964 "name": null, 00:19:17.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.964 "is_configured": false, 00:19:17.964 "data_offset": 0, 00:19:17.964 "data_size": 63488 00:19:17.964 }, 00:19:17.964 { 00:19:17.964 "name": "BaseBdev3", 00:19:17.964 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:17.964 "is_configured": true, 00:19:17.964 "data_offset": 2048, 00:19:17.964 "data_size": 63488 00:19:17.964 }, 00:19:17.964 { 00:19:17.964 "name": "BaseBdev4", 00:19:17.964 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:17.964 "is_configured": true, 00:19:17.964 "data_offset": 2048, 00:19:17.964 "data_size": 63488 00:19:17.964 } 00:19:17.964 ] 00:19:17.964 }' 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.964 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.965 
04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.965 "name": "raid_bdev1", 00:19:17.965 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:17.965 "strip_size_kb": 0, 00:19:17.965 "state": "online", 00:19:17.965 "raid_level": "raid1", 00:19:17.965 "superblock": true, 00:19:17.965 "num_base_bdevs": 4, 00:19:17.965 "num_base_bdevs_discovered": 3, 00:19:17.965 "num_base_bdevs_operational": 3, 00:19:17.965 "base_bdevs_list": [ 00:19:17.965 { 00:19:17.965 "name": "spare", 00:19:17.965 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:17.965 "is_configured": true, 00:19:17.965 "data_offset": 2048, 00:19:17.965 "data_size": 63488 00:19:17.965 }, 00:19:17.965 { 00:19:17.965 "name": null, 00:19:17.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.965 "is_configured": false, 00:19:17.965 "data_offset": 0, 00:19:17.965 "data_size": 63488 00:19:17.965 }, 00:19:17.965 { 00:19:17.965 "name": "BaseBdev3", 00:19:17.965 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:17.965 "is_configured": true, 00:19:17.965 "data_offset": 2048, 00:19:17.965 "data_size": 63488 
00:19:17.965 }, 00:19:17.965 { 00:19:17.965 "name": "BaseBdev4", 00:19:17.965 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:17.965 "is_configured": true, 00:19:17.965 "data_offset": 2048, 00:19:17.965 "data_size": 63488 00:19:17.965 } 00:19:17.965 ] 00:19:17.965 }' 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.965 04:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.530 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:18.530 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.530 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.530 [2024-11-27 04:41:06.037118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.530 [2024-11-27 04:41:06.037211] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.531 [2024-11-27 04:41:06.037344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.531 [2024-11-27 04:41:06.037455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.531 [2024-11-27 04:41:06.037483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.531 
04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:18.531 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:18.788 /dev/nbd0 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:18.788 1+0 records in 00:19:18.788 1+0 records out 00:19:18.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318227 s, 12.9 MB/s 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:18.788 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:19.353 /dev/nbd1 00:19:19.353 04:41:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:19.353 1+0 records in 00:19:19.353 1+0 records out 00:19:19.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420686 s, 9.7 MB/s 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:19.353 04:41:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.353 04:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:19.921 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:19.921 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:19.921 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:19.921 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.921 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.922 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:19.922 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:19.922 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.922 04:41:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.922 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:20.180 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:20.180 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:20.180 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:20.180 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:20.180 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:20.180 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:20.180 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:20.180 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:20.180 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.181 [2024-11-27 04:41:07.568663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:19:20.181 [2024-11-27 04:41:07.568725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.181 [2024-11-27 04:41:07.568759] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:20.181 [2024-11-27 04:41:07.568794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.181 [2024-11-27 04:41:07.571819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.181 [2024-11-27 04:41:07.571874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:20.181 [2024-11-27 04:41:07.571993] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:20.181 [2024-11-27 04:41:07.572056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.181 [2024-11-27 04:41:07.572238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:20.181 [2024-11-27 04:41:07.572378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:20.181 spare 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.181 [2024-11-27 04:41:07.672514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:20.181 [2024-11-27 04:41:07.672568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:20.181 [2024-11-27 04:41:07.673014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:19:20.181 [2024-11-27 04:41:07.673307] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:20.181 [2024-11-27 04:41:07.673341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:20.181 [2024-11-27 04:41:07.673584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.181 "name": "raid_bdev1", 00:19:20.181 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:20.181 "strip_size_kb": 0, 00:19:20.181 "state": "online", 00:19:20.181 "raid_level": "raid1", 00:19:20.181 "superblock": true, 00:19:20.181 "num_base_bdevs": 4, 00:19:20.181 "num_base_bdevs_discovered": 3, 00:19:20.181 "num_base_bdevs_operational": 3, 00:19:20.181 "base_bdevs_list": [ 00:19:20.181 { 00:19:20.181 "name": "spare", 00:19:20.181 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:20.181 "is_configured": true, 00:19:20.181 "data_offset": 2048, 00:19:20.181 "data_size": 63488 00:19:20.181 }, 00:19:20.181 { 00:19:20.181 "name": null, 00:19:20.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.181 "is_configured": false, 00:19:20.181 "data_offset": 2048, 00:19:20.181 "data_size": 63488 00:19:20.181 }, 00:19:20.181 { 00:19:20.181 "name": "BaseBdev3", 00:19:20.181 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:20.181 "is_configured": true, 00:19:20.181 "data_offset": 2048, 00:19:20.181 "data_size": 63488 00:19:20.181 }, 00:19:20.181 { 00:19:20.181 "name": "BaseBdev4", 00:19:20.181 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:20.181 "is_configured": true, 00:19:20.181 "data_offset": 2048, 00:19:20.181 "data_size": 63488 00:19:20.181 } 00:19:20.181 ] 00:19:20.181 }' 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.181 04:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.748 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.748 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.748 04:41:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:20.748 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:20.748 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.748 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.748 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.748 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.748 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.748 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.748 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.748 "name": "raid_bdev1", 00:19:20.748 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:20.748 "strip_size_kb": 0, 00:19:20.748 "state": "online", 00:19:20.748 "raid_level": "raid1", 00:19:20.748 "superblock": true, 00:19:20.748 "num_base_bdevs": 4, 00:19:20.748 "num_base_bdevs_discovered": 3, 00:19:20.748 "num_base_bdevs_operational": 3, 00:19:20.748 "base_bdevs_list": [ 00:19:20.748 { 00:19:20.748 "name": "spare", 00:19:20.748 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:20.748 "is_configured": true, 00:19:20.748 "data_offset": 2048, 00:19:20.748 "data_size": 63488 00:19:20.748 }, 00:19:20.748 { 00:19:20.748 "name": null, 00:19:20.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.748 "is_configured": false, 00:19:20.748 "data_offset": 2048, 00:19:20.748 "data_size": 63488 00:19:20.748 }, 00:19:20.748 { 00:19:20.748 "name": "BaseBdev3", 00:19:20.748 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:20.748 "is_configured": true, 00:19:20.748 "data_offset": 2048, 00:19:20.748 "data_size": 63488 00:19:20.748 
}, 00:19:20.748 { 00:19:20.748 "name": "BaseBdev4", 00:19:20.749 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:20.749 "is_configured": true, 00:19:20.749 "data_offset": 2048, 00:19:20.749 "data_size": 63488 00:19:20.749 } 00:19:20.749 ] 00:19:20.749 }' 00:19:20.749 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.749 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:20.749 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.749 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:20.749 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.749 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.749 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.749 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:20.749 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.007 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.008 [2024-11-27 04:41:08.389793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.008 "name": "raid_bdev1", 00:19:21.008 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:21.008 "strip_size_kb": 0, 00:19:21.008 "state": "online", 00:19:21.008 "raid_level": "raid1", 00:19:21.008 "superblock": true, 00:19:21.008 "num_base_bdevs": 4, 00:19:21.008 "num_base_bdevs_discovered": 2, 00:19:21.008 "num_base_bdevs_operational": 
2, 00:19:21.008 "base_bdevs_list": [ 00:19:21.008 { 00:19:21.008 "name": null, 00:19:21.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.008 "is_configured": false, 00:19:21.008 "data_offset": 0, 00:19:21.008 "data_size": 63488 00:19:21.008 }, 00:19:21.008 { 00:19:21.008 "name": null, 00:19:21.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.008 "is_configured": false, 00:19:21.008 "data_offset": 2048, 00:19:21.008 "data_size": 63488 00:19:21.008 }, 00:19:21.008 { 00:19:21.008 "name": "BaseBdev3", 00:19:21.008 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:21.008 "is_configured": true, 00:19:21.008 "data_offset": 2048, 00:19:21.008 "data_size": 63488 00:19:21.008 }, 00:19:21.008 { 00:19:21.008 "name": "BaseBdev4", 00:19:21.008 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:21.008 "is_configured": true, 00:19:21.008 "data_offset": 2048, 00:19:21.008 "data_size": 63488 00:19:21.008 } 00:19:21.008 ] 00:19:21.008 }' 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.008 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.576 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:21.576 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.576 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.576 [2024-11-27 04:41:08.933983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.576 [2024-11-27 04:41:08.934265] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:21.576 [2024-11-27 04:41:08.934300] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:21.576 [2024-11-27 04:41:08.934350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.576 [2024-11-27 04:41:08.947857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:19:21.576 04:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.576 04:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:21.576 [2024-11-27 04:41:08.950373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.511 04:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.511 04:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.511 04:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.511 04:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.511 04:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.511 04:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.511 04:41:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.511 04:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.511 04:41:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.511 04:41:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.511 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.511 "name": "raid_bdev1", 00:19:22.511 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:22.511 "strip_size_kb": 0, 00:19:22.511 "state": "online", 00:19:22.511 "raid_level": "raid1", 
00:19:22.511 "superblock": true, 00:19:22.511 "num_base_bdevs": 4, 00:19:22.511 "num_base_bdevs_discovered": 3, 00:19:22.511 "num_base_bdevs_operational": 3, 00:19:22.511 "process": { 00:19:22.511 "type": "rebuild", 00:19:22.511 "target": "spare", 00:19:22.511 "progress": { 00:19:22.511 "blocks": 20480, 00:19:22.511 "percent": 32 00:19:22.511 } 00:19:22.511 }, 00:19:22.511 "base_bdevs_list": [ 00:19:22.511 { 00:19:22.511 "name": "spare", 00:19:22.511 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:22.511 "is_configured": true, 00:19:22.511 "data_offset": 2048, 00:19:22.511 "data_size": 63488 00:19:22.511 }, 00:19:22.511 { 00:19:22.511 "name": null, 00:19:22.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.511 "is_configured": false, 00:19:22.511 "data_offset": 2048, 00:19:22.511 "data_size": 63488 00:19:22.511 }, 00:19:22.511 { 00:19:22.511 "name": "BaseBdev3", 00:19:22.511 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:22.511 "is_configured": true, 00:19:22.511 "data_offset": 2048, 00:19:22.511 "data_size": 63488 00:19:22.511 }, 00:19:22.511 { 00:19:22.511 "name": "BaseBdev4", 00:19:22.511 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:22.511 "is_configured": true, 00:19:22.511 "data_offset": 2048, 00:19:22.511 "data_size": 63488 00:19:22.511 } 00:19:22.511 ] 00:19:22.511 }' 00:19:22.511 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.511 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.511 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.511 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.511 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:22.511 04:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:22.511 04:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.511 [2024-11-27 04:41:10.127983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.769 [2024-11-27 04:41:10.159840] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.769 [2024-11-27 04:41:10.159923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.769 [2024-11-27 04:41:10.159954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.769 [2024-11-27 04:41:10.159967] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.769 04:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.770 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.770 "name": "raid_bdev1", 00:19:22.770 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:22.770 "strip_size_kb": 0, 00:19:22.770 "state": "online", 00:19:22.770 "raid_level": "raid1", 00:19:22.770 "superblock": true, 00:19:22.770 "num_base_bdevs": 4, 00:19:22.770 "num_base_bdevs_discovered": 2, 00:19:22.770 "num_base_bdevs_operational": 2, 00:19:22.770 "base_bdevs_list": [ 00:19:22.770 { 00:19:22.770 "name": null, 00:19:22.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.770 "is_configured": false, 00:19:22.770 "data_offset": 0, 00:19:22.770 "data_size": 63488 00:19:22.770 }, 00:19:22.770 { 00:19:22.770 "name": null, 00:19:22.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.770 "is_configured": false, 00:19:22.770 "data_offset": 2048, 00:19:22.770 "data_size": 63488 00:19:22.770 }, 00:19:22.770 { 00:19:22.770 "name": "BaseBdev3", 00:19:22.770 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:22.770 "is_configured": true, 00:19:22.770 "data_offset": 2048, 00:19:22.770 "data_size": 63488 00:19:22.770 }, 00:19:22.770 { 00:19:22.770 "name": "BaseBdev4", 00:19:22.770 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:22.770 "is_configured": true, 00:19:22.770 "data_offset": 2048, 00:19:22.770 "data_size": 63488 00:19:22.770 } 00:19:22.770 ] 00:19:22.770 }' 00:19:22.770 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:22.770 04:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.335 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:23.335 04:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.335 04:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.335 [2024-11-27 04:41:10.691847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:23.335 [2024-11-27 04:41:10.691941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.335 [2024-11-27 04:41:10.691991] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:23.335 [2024-11-27 04:41:10.692008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.335 [2024-11-27 04:41:10.692732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.335 [2024-11-27 04:41:10.692789] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:23.335 [2024-11-27 04:41:10.692922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:23.335 [2024-11-27 04:41:10.692944] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:23.335 [2024-11-27 04:41:10.692967] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:23.335 [2024-11-27 04:41:10.692998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.335 [2024-11-27 04:41:10.707751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:19:23.335 spare 00:19:23.335 04:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.335 04:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:23.335 [2024-11-27 04:41:10.710634] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.270 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.270 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.270 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.270 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.270 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.271 "name": "raid_bdev1", 00:19:24.271 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:24.271 "strip_size_kb": 0, 00:19:24.271 "state": "online", 00:19:24.271 
"raid_level": "raid1", 00:19:24.271 "superblock": true, 00:19:24.271 "num_base_bdevs": 4, 00:19:24.271 "num_base_bdevs_discovered": 3, 00:19:24.271 "num_base_bdevs_operational": 3, 00:19:24.271 "process": { 00:19:24.271 "type": "rebuild", 00:19:24.271 "target": "spare", 00:19:24.271 "progress": { 00:19:24.271 "blocks": 20480, 00:19:24.271 "percent": 32 00:19:24.271 } 00:19:24.271 }, 00:19:24.271 "base_bdevs_list": [ 00:19:24.271 { 00:19:24.271 "name": "spare", 00:19:24.271 "uuid": "1ff713e4-7146-5a69-8e0d-4e509b01cbbf", 00:19:24.271 "is_configured": true, 00:19:24.271 "data_offset": 2048, 00:19:24.271 "data_size": 63488 00:19:24.271 }, 00:19:24.271 { 00:19:24.271 "name": null, 00:19:24.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.271 "is_configured": false, 00:19:24.271 "data_offset": 2048, 00:19:24.271 "data_size": 63488 00:19:24.271 }, 00:19:24.271 { 00:19:24.271 "name": "BaseBdev3", 00:19:24.271 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:24.271 "is_configured": true, 00:19:24.271 "data_offset": 2048, 00:19:24.271 "data_size": 63488 00:19:24.271 }, 00:19:24.271 { 00:19:24.271 "name": "BaseBdev4", 00:19:24.271 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:24.271 "is_configured": true, 00:19:24.271 "data_offset": 2048, 00:19:24.271 "data_size": 63488 00:19:24.271 } 00:19:24.271 ] 00:19:24.271 }' 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.271 04:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.271 [2024-11-27 04:41:11.872202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.529 [2024-11-27 04:41:11.920158] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:24.529 [2024-11-27 04:41:11.920274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.529 [2024-11-27 04:41:11.920301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.529 [2024-11-27 04:41:11.920317] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.530 
04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.530 "name": "raid_bdev1", 00:19:24.530 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:24.530 "strip_size_kb": 0, 00:19:24.530 "state": "online", 00:19:24.530 "raid_level": "raid1", 00:19:24.530 "superblock": true, 00:19:24.530 "num_base_bdevs": 4, 00:19:24.530 "num_base_bdevs_discovered": 2, 00:19:24.530 "num_base_bdevs_operational": 2, 00:19:24.530 "base_bdevs_list": [ 00:19:24.530 { 00:19:24.530 "name": null, 00:19:24.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.530 "is_configured": false, 00:19:24.530 "data_offset": 0, 00:19:24.530 "data_size": 63488 00:19:24.530 }, 00:19:24.530 { 00:19:24.530 "name": null, 00:19:24.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.530 "is_configured": false, 00:19:24.530 "data_offset": 2048, 00:19:24.530 "data_size": 63488 00:19:24.530 }, 00:19:24.530 { 00:19:24.530 "name": "BaseBdev3", 00:19:24.530 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:24.530 "is_configured": true, 00:19:24.530 "data_offset": 2048, 00:19:24.530 "data_size": 63488 00:19:24.530 }, 00:19:24.530 { 00:19:24.530 "name": "BaseBdev4", 00:19:24.530 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:24.530 "is_configured": true, 00:19:24.530 "data_offset": 2048, 00:19:24.530 "data_size": 63488 00:19:24.530 } 00:19:24.530 ] 00:19:24.530 }' 00:19:24.530 04:41:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.530 04:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.098 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.098 "name": "raid_bdev1", 00:19:25.098 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:25.098 "strip_size_kb": 0, 00:19:25.098 "state": "online", 00:19:25.098 "raid_level": "raid1", 00:19:25.098 "superblock": true, 00:19:25.098 "num_base_bdevs": 4, 00:19:25.098 "num_base_bdevs_discovered": 2, 00:19:25.098 "num_base_bdevs_operational": 2, 00:19:25.098 "base_bdevs_list": [ 00:19:25.098 { 00:19:25.098 "name": null, 00:19:25.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.098 "is_configured": false, 00:19:25.098 "data_offset": 0, 00:19:25.098 "data_size": 63488 00:19:25.098 }, 00:19:25.098 
{ 00:19:25.098 "name": null, 00:19:25.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.098 "is_configured": false, 00:19:25.098 "data_offset": 2048, 00:19:25.098 "data_size": 63488 00:19:25.098 }, 00:19:25.098 { 00:19:25.098 "name": "BaseBdev3", 00:19:25.098 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:25.098 "is_configured": true, 00:19:25.098 "data_offset": 2048, 00:19:25.098 "data_size": 63488 00:19:25.098 }, 00:19:25.098 { 00:19:25.098 "name": "BaseBdev4", 00:19:25.098 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:25.098 "is_configured": true, 00:19:25.098 "data_offset": 2048, 00:19:25.098 "data_size": 63488 00:19:25.098 } 00:19:25.098 ] 00:19:25.098 }' 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.099 [2024-11-27 04:41:12.644720] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:25.099 [2024-11-27 04:41:12.644802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.099 [2024-11-27 04:41:12.644834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:25.099 [2024-11-27 04:41:12.644852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.099 [2024-11-27 04:41:12.645435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.099 [2024-11-27 04:41:12.645475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:25.099 [2024-11-27 04:41:12.645578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:25.099 [2024-11-27 04:41:12.645612] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:25.099 [2024-11-27 04:41:12.645625] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:25.099 [2024-11-27 04:41:12.645663] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:25.099 BaseBdev1 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.099 04:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:26.035 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:26.035 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.035 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.035 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.035 04:41:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.035 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.035 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.035 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.035 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.035 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.294 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.294 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.294 04:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.294 04:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.294 04:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.294 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.294 "name": "raid_bdev1", 00:19:26.294 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:26.294 "strip_size_kb": 0, 00:19:26.294 "state": "online", 00:19:26.294 "raid_level": "raid1", 00:19:26.294 "superblock": true, 00:19:26.294 "num_base_bdevs": 4, 00:19:26.294 "num_base_bdevs_discovered": 2, 00:19:26.294 "num_base_bdevs_operational": 2, 00:19:26.294 "base_bdevs_list": [ 00:19:26.294 { 00:19:26.294 "name": null, 00:19:26.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.294 "is_configured": false, 00:19:26.294 "data_offset": 0, 00:19:26.294 "data_size": 63488 00:19:26.294 }, 00:19:26.294 { 00:19:26.294 "name": null, 00:19:26.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.294 
"is_configured": false, 00:19:26.294 "data_offset": 2048, 00:19:26.294 "data_size": 63488 00:19:26.294 }, 00:19:26.294 { 00:19:26.294 "name": "BaseBdev3", 00:19:26.294 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:26.294 "is_configured": true, 00:19:26.294 "data_offset": 2048, 00:19:26.294 "data_size": 63488 00:19:26.294 }, 00:19:26.294 { 00:19:26.294 "name": "BaseBdev4", 00:19:26.294 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:26.294 "is_configured": true, 00:19:26.294 "data_offset": 2048, 00:19:26.294 "data_size": 63488 00:19:26.294 } 00:19:26.294 ] 00:19:26.294 }' 00:19:26.294 04:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.294 04:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.553 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.553 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.553 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.553 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.553 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.553 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.553 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.553 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.553 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.553 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:26.812 "name": "raid_bdev1", 00:19:26.812 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:26.812 "strip_size_kb": 0, 00:19:26.812 "state": "online", 00:19:26.812 "raid_level": "raid1", 00:19:26.812 "superblock": true, 00:19:26.812 "num_base_bdevs": 4, 00:19:26.812 "num_base_bdevs_discovered": 2, 00:19:26.812 "num_base_bdevs_operational": 2, 00:19:26.812 "base_bdevs_list": [ 00:19:26.812 { 00:19:26.812 "name": null, 00:19:26.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.812 "is_configured": false, 00:19:26.812 "data_offset": 0, 00:19:26.812 "data_size": 63488 00:19:26.812 }, 00:19:26.812 { 00:19:26.812 "name": null, 00:19:26.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.812 "is_configured": false, 00:19:26.812 "data_offset": 2048, 00:19:26.812 "data_size": 63488 00:19:26.812 }, 00:19:26.812 { 00:19:26.812 "name": "BaseBdev3", 00:19:26.812 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:26.812 "is_configured": true, 00:19:26.812 "data_offset": 2048, 00:19:26.812 "data_size": 63488 00:19:26.812 }, 00:19:26.812 { 00:19:26.812 "name": "BaseBdev4", 00:19:26.812 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:26.812 "is_configured": true, 00:19:26.812 "data_offset": 2048, 00:19:26.812 "data_size": 63488 00:19:26.812 } 00:19:26.812 ] 00:19:26.812 }' 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.812 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.812 [2024-11-27 04:41:14.305270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.812 [2024-11-27 04:41:14.305537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:26.812 [2024-11-27 04:41:14.305570] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:26.812 request: 00:19:26.813 { 00:19:26.813 "base_bdev": "BaseBdev1", 00:19:26.813 "raid_bdev": "raid_bdev1", 00:19:26.813 "method": "bdev_raid_add_base_bdev", 00:19:26.813 "req_id": 1 00:19:26.813 } 00:19:26.813 Got JSON-RPC error response 00:19:26.813 response: 00:19:26.813 { 00:19:26.813 "code": -22, 00:19:26.813 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:26.813 } 00:19:26.813 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:26.813 04:41:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:19:26.813 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.813 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.813 04:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.813 04:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:27.749 04:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.008 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.008 "name": "raid_bdev1", 00:19:28.008 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:28.008 "strip_size_kb": 0, 00:19:28.008 "state": "online", 00:19:28.008 "raid_level": "raid1", 00:19:28.008 "superblock": true, 00:19:28.008 "num_base_bdevs": 4, 00:19:28.008 "num_base_bdevs_discovered": 2, 00:19:28.008 "num_base_bdevs_operational": 2, 00:19:28.008 "base_bdevs_list": [ 00:19:28.008 { 00:19:28.008 "name": null, 00:19:28.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.008 "is_configured": false, 00:19:28.008 "data_offset": 0, 00:19:28.008 "data_size": 63488 00:19:28.008 }, 00:19:28.008 { 00:19:28.008 "name": null, 00:19:28.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.008 "is_configured": false, 00:19:28.008 "data_offset": 2048, 00:19:28.008 "data_size": 63488 00:19:28.008 }, 00:19:28.008 { 00:19:28.008 "name": "BaseBdev3", 00:19:28.008 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:28.008 "is_configured": true, 00:19:28.008 "data_offset": 2048, 00:19:28.008 "data_size": 63488 00:19:28.008 }, 00:19:28.008 { 00:19:28.008 "name": "BaseBdev4", 00:19:28.008 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:28.008 "is_configured": true, 00:19:28.008 "data_offset": 2048, 00:19:28.008 "data_size": 63488 00:19:28.008 } 00:19:28.008 ] 00:19:28.008 }' 00:19:28.008 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.008 04:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.266 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:28.266 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.266 04:41:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:28.266 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:28.266 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.266 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.266 04:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.266 04:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.266 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.266 04:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.525 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.525 "name": "raid_bdev1", 00:19:28.525 "uuid": "6fa05bc3-6e54-43d4-ac34-ab569dd46639", 00:19:28.525 "strip_size_kb": 0, 00:19:28.525 "state": "online", 00:19:28.525 "raid_level": "raid1", 00:19:28.525 "superblock": true, 00:19:28.525 "num_base_bdevs": 4, 00:19:28.525 "num_base_bdevs_discovered": 2, 00:19:28.525 "num_base_bdevs_operational": 2, 00:19:28.525 "base_bdevs_list": [ 00:19:28.525 { 00:19:28.525 "name": null, 00:19:28.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.525 "is_configured": false, 00:19:28.525 "data_offset": 0, 00:19:28.525 "data_size": 63488 00:19:28.525 }, 00:19:28.525 { 00:19:28.525 "name": null, 00:19:28.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.525 "is_configured": false, 00:19:28.525 "data_offset": 2048, 00:19:28.525 "data_size": 63488 00:19:28.525 }, 00:19:28.525 { 00:19:28.525 "name": "BaseBdev3", 00:19:28.525 "uuid": "f8ef2363-b026-57ec-a21f-30e2590a5ef5", 00:19:28.525 "is_configured": true, 00:19:28.525 "data_offset": 2048, 00:19:28.525 "data_size": 63488 00:19:28.525 }, 
00:19:28.525 { 00:19:28.525 "name": "BaseBdev4", 00:19:28.525 "uuid": "72cc9552-d977-5474-942c-353fcbc863fa", 00:19:28.525 "is_configured": true, 00:19:28.525 "data_offset": 2048, 00:19:28.525 "data_size": 63488 00:19:28.525 } 00:19:28.525 ] 00:19:28.525 }' 00:19:28.525 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.525 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.525 04:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78412 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78412 ']' 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78412 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78412 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.525 killing process with pid 78412 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78412' 00:19:28.525 04:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78412 00:19:28.525 Received shutdown signal, test time was about 60.000000 seconds 00:19:28.525 00:19:28.525 Latency(us) 00:19:28.525 
[2024-11-27T04:41:16.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.525 [2024-11-27T04:41:16.148Z] =================================================================================================================== 00:19:28.526 [2024-11-27T04:41:16.149Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.526 [2024-11-27 04:41:16.047030] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:28.526 04:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78412 00:19:28.526 [2024-11-27 04:41:16.047183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.526 [2024-11-27 04:41:16.047303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.526 [2024-11-27 04:41:16.047332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:29.092 [2024-11-27 04:41:16.493882] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:30.028 ************************************ 00:19:30.028 END TEST raid_rebuild_test_sb 00:19:30.028 ************************************ 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:30.028 00:19:30.028 real 0m29.814s 00:19:30.028 user 0m36.582s 00:19:30.028 sys 0m4.140s 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.028 04:41:17 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:19:30.028 04:41:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:30.028 04:41:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.028 04:41:17 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:19:30.028 ************************************ 00:19:30.028 START TEST raid_rebuild_test_io 00:19:30.028 ************************************ 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79218 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79218 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79218 ']' 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.028 04:41:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.287 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:30.287 Zero copy mechanism will not be used. 00:19:30.287 [2024-11-27 04:41:17.680423] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:19:30.287 [2024-11-27 04:41:17.680579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79218 ] 00:19:30.287 [2024-11-27 04:41:17.857027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.545 [2024-11-27 04:41:18.004971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.803 [2024-11-27 04:41:18.209253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.803 [2024-11-27 04:41:18.209336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 BaseBdev1_malloc 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 [2024-11-27 04:41:18.739927] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:31.370 [2024-11-27 04:41:18.740000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.370 [2024-11-27 04:41:18.740031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:31.370 [2024-11-27 04:41:18.740050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.370 [2024-11-27 04:41:18.742863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.370 [2024-11-27 04:41:18.742914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:31.370 BaseBdev1 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:19:31.370 BaseBdev2_malloc 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 [2024-11-27 04:41:18.792073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:31.370 [2024-11-27 04:41:18.792148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.370 [2024-11-27 04:41:18.792191] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:31.370 [2024-11-27 04:41:18.792209] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.370 [2024-11-27 04:41:18.794964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.370 [2024-11-27 04:41:18.795010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:31.370 BaseBdev2 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 BaseBdev3_malloc 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 [2024-11-27 04:41:18.853046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:31.370 [2024-11-27 04:41:18.853115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.370 [2024-11-27 04:41:18.853156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:31.370 [2024-11-27 04:41:18.853178] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.370 [2024-11-27 04:41:18.855999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.370 [2024-11-27 04:41:18.856052] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:31.370 BaseBdev3 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 BaseBdev4_malloc 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 [2024-11-27 04:41:18.910174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:31.370 [2024-11-27 04:41:18.910249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.370 [2024-11-27 04:41:18.910280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:31.370 [2024-11-27 04:41:18.910300] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.370 [2024-11-27 04:41:18.913079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.370 [2024-11-27 04:41:18.913129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:31.370 BaseBdev4 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 spare_malloc 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 spare_delay 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 [2024-11-27 04:41:18.975171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:31.370 [2024-11-27 04:41:18.975241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.370 [2024-11-27 04:41:18.975269] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:31.370 [2024-11-27 04:41:18.975286] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.370 [2024-11-27 04:41:18.978150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.370 [2024-11-27 04:41:18.978203] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:31.370 spare 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 [2024-11-27 04:41:18.983217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.370 [2024-11-27 04:41:18.985674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:31.370 [2024-11-27 04:41:18.985809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.370 [2024-11-27 04:41:18.985907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:19:31.370 [2024-11-27 04:41:18.986029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:31.370 [2024-11-27 04:41:18.986085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:31.370 [2024-11-27 04:41:18.986447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:31.370 [2024-11-27 04:41:18.986692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:31.370 [2024-11-27 04:41:18.986723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:31.370 [2024-11-27 04:41:18.986939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.370 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.371 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.371 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.371 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:31.629 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.629 04:41:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.629 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.629 04:41:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.629 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.629 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.629 "name": "raid_bdev1", 00:19:31.629 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:31.629 "strip_size_kb": 0, 00:19:31.629 "state": "online", 00:19:31.629 "raid_level": "raid1", 00:19:31.629 "superblock": false, 00:19:31.629 "num_base_bdevs": 4, 00:19:31.629 "num_base_bdevs_discovered": 4, 00:19:31.629 "num_base_bdevs_operational": 4, 00:19:31.629 "base_bdevs_list": [ 00:19:31.629 { 00:19:31.629 "name": "BaseBdev1", 00:19:31.629 "uuid": "5d939270-21cc-5674-b748-80942aeb32a8", 00:19:31.629 "is_configured": true, 00:19:31.629 "data_offset": 0, 00:19:31.629 "data_size": 65536 00:19:31.629 }, 00:19:31.629 { 00:19:31.629 "name": "BaseBdev2", 00:19:31.629 "uuid": "01f7e8db-a879-51b3-b897-062f743c79c6", 00:19:31.629 "is_configured": true, 00:19:31.629 "data_offset": 0, 00:19:31.629 "data_size": 65536 00:19:31.629 }, 00:19:31.629 { 00:19:31.629 "name": "BaseBdev3", 00:19:31.629 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:31.630 "is_configured": true, 00:19:31.630 "data_offset": 0, 00:19:31.630 "data_size": 65536 00:19:31.630 }, 00:19:31.630 { 00:19:31.630 "name": "BaseBdev4", 00:19:31.630 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:31.630 "is_configured": true, 00:19:31.630 "data_offset": 0, 00:19:31.630 "data_size": 65536 00:19:31.630 } 00:19:31.630 ] 00:19:31.630 }' 00:19:31.630 
04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.630 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.195 [2024-11-27 04:41:19.519843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:32.195 04:41:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.195 [2024-11-27 04:41:19.647398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.195 "name": "raid_bdev1", 00:19:32.195 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:32.195 "strip_size_kb": 0, 00:19:32.195 "state": "online", 00:19:32.195 "raid_level": "raid1", 00:19:32.195 "superblock": false, 00:19:32.195 "num_base_bdevs": 4, 00:19:32.195 "num_base_bdevs_discovered": 3, 00:19:32.195 "num_base_bdevs_operational": 3, 00:19:32.195 "base_bdevs_list": [ 00:19:32.195 { 00:19:32.195 "name": null, 00:19:32.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.195 "is_configured": false, 00:19:32.195 "data_offset": 0, 00:19:32.195 "data_size": 65536 00:19:32.195 }, 00:19:32.195 { 00:19:32.195 "name": "BaseBdev2", 00:19:32.195 "uuid": "01f7e8db-a879-51b3-b897-062f743c79c6", 00:19:32.195 "is_configured": true, 00:19:32.195 "data_offset": 0, 00:19:32.195 "data_size": 65536 00:19:32.195 }, 00:19:32.195 { 00:19:32.195 "name": "BaseBdev3", 00:19:32.195 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:32.195 "is_configured": true, 00:19:32.195 "data_offset": 0, 00:19:32.195 "data_size": 65536 00:19:32.195 }, 00:19:32.195 { 00:19:32.195 "name": "BaseBdev4", 00:19:32.195 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:32.195 "is_configured": true, 00:19:32.195 "data_offset": 0, 00:19:32.195 "data_size": 65536 00:19:32.195 } 00:19:32.195 ] 00:19:32.195 }' 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.195 04:41:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.195 [2024-11-27 04:41:19.779557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:32.195 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:32.195 Zero copy mechanism will not be used. 00:19:32.195 Running I/O for 60 seconds... 
00:19:32.763 04:41:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:32.763 04:41:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.763 04:41:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.763 [2024-11-27 04:41:20.169640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.763 04:41:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.763 04:41:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:32.763 [2024-11-27 04:41:20.265221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:32.763 [2024-11-27 04:41:20.267930] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:33.022 [2024-11-27 04:41:20.389504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:33.022 [2024-11-27 04:41:20.390212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:33.022 [2024-11-27 04:41:20.614353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:33.022 [2024-11-27 04:41:20.615299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:33.540 145.00 IOPS, 435.00 MiB/s [2024-11-27T04:41:21.163Z] [2024-11-27 04:41:20.966302] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.798 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.798 "name": "raid_bdev1", 00:19:33.798 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:33.798 "strip_size_kb": 0, 00:19:33.798 "state": "online", 00:19:33.798 "raid_level": "raid1", 00:19:33.798 "superblock": false, 00:19:33.798 "num_base_bdevs": 4, 00:19:33.798 "num_base_bdevs_discovered": 4, 00:19:33.798 "num_base_bdevs_operational": 4, 00:19:33.798 "process": { 00:19:33.798 "type": "rebuild", 00:19:33.798 "target": "spare", 00:19:33.798 "progress": { 00:19:33.798 "blocks": 10240, 00:19:33.798 "percent": 15 00:19:33.798 } 00:19:33.798 }, 00:19:33.798 "base_bdevs_list": [ 00:19:33.798 { 00:19:33.798 "name": "spare", 00:19:33.798 "uuid": "63bd131b-8134-516a-879b-44a296880c16", 00:19:33.798 "is_configured": true, 00:19:33.798 "data_offset": 0, 00:19:33.798 "data_size": 65536 00:19:33.798 }, 00:19:33.798 { 00:19:33.798 "name": "BaseBdev2", 00:19:33.798 "uuid": "01f7e8db-a879-51b3-b897-062f743c79c6", 00:19:33.798 "is_configured": true, 00:19:33.799 "data_offset": 0, 00:19:33.799 
"data_size": 65536 00:19:33.799 }, 00:19:33.799 { 00:19:33.799 "name": "BaseBdev3", 00:19:33.799 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:33.799 "is_configured": true, 00:19:33.799 "data_offset": 0, 00:19:33.799 "data_size": 65536 00:19:33.799 }, 00:19:33.799 { 00:19:33.799 "name": "BaseBdev4", 00:19:33.799 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:33.799 "is_configured": true, 00:19:33.799 "data_offset": 0, 00:19:33.799 "data_size": 65536 00:19:33.799 } 00:19:33.799 ] 00:19:33.799 }' 00:19:33.799 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.799 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.799 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.799 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.799 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:33.799 04:41:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.799 04:41:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.799 [2024-11-27 04:41:21.381160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.057 [2024-11-27 04:41:21.553694] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:34.057 [2024-11-27 04:41:21.558430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.057 [2024-11-27 04:41:21.558515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.057 [2024-11-27 04:41:21.558536] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:34.057 [2024-11-27 04:41:21.582021] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.057 "name": "raid_bdev1", 00:19:34.057 
"uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:34.057 "strip_size_kb": 0, 00:19:34.057 "state": "online", 00:19:34.057 "raid_level": "raid1", 00:19:34.057 "superblock": false, 00:19:34.057 "num_base_bdevs": 4, 00:19:34.057 "num_base_bdevs_discovered": 3, 00:19:34.057 "num_base_bdevs_operational": 3, 00:19:34.057 "base_bdevs_list": [ 00:19:34.057 { 00:19:34.057 "name": null, 00:19:34.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.057 "is_configured": false, 00:19:34.057 "data_offset": 0, 00:19:34.057 "data_size": 65536 00:19:34.057 }, 00:19:34.057 { 00:19:34.057 "name": "BaseBdev2", 00:19:34.057 "uuid": "01f7e8db-a879-51b3-b897-062f743c79c6", 00:19:34.057 "is_configured": true, 00:19:34.057 "data_offset": 0, 00:19:34.057 "data_size": 65536 00:19:34.057 }, 00:19:34.057 { 00:19:34.057 "name": "BaseBdev3", 00:19:34.057 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:34.057 "is_configured": true, 00:19:34.057 "data_offset": 0, 00:19:34.057 "data_size": 65536 00:19:34.057 }, 00:19:34.057 { 00:19:34.057 "name": "BaseBdev4", 00:19:34.057 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:34.057 "is_configured": true, 00:19:34.057 "data_offset": 0, 00:19:34.057 "data_size": 65536 00:19:34.057 } 00:19:34.057 ] 00:19:34.057 }' 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.057 04:41:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.575 117.00 IOPS, 351.00 MiB/s [2024-11-27T04:41:22.198Z] 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:34.575 04:41:22 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.575 "name": "raid_bdev1", 00:19:34.575 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:34.575 "strip_size_kb": 0, 00:19:34.575 "state": "online", 00:19:34.575 "raid_level": "raid1", 00:19:34.575 "superblock": false, 00:19:34.575 "num_base_bdevs": 4, 00:19:34.575 "num_base_bdevs_discovered": 3, 00:19:34.575 "num_base_bdevs_operational": 3, 00:19:34.575 "base_bdevs_list": [ 00:19:34.575 { 00:19:34.575 "name": null, 00:19:34.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.575 "is_configured": false, 00:19:34.575 "data_offset": 0, 00:19:34.575 "data_size": 65536 00:19:34.575 }, 00:19:34.575 { 00:19:34.575 "name": "BaseBdev2", 00:19:34.575 "uuid": "01f7e8db-a879-51b3-b897-062f743c79c6", 00:19:34.575 "is_configured": true, 00:19:34.575 "data_offset": 0, 00:19:34.575 "data_size": 65536 00:19:34.575 }, 00:19:34.575 { 00:19:34.575 "name": "BaseBdev3", 00:19:34.575 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:34.575 "is_configured": true, 00:19:34.575 "data_offset": 0, 00:19:34.575 "data_size": 65536 00:19:34.575 }, 00:19:34.575 { 00:19:34.575 "name": "BaseBdev4", 00:19:34.575 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:34.575 "is_configured": true, 00:19:34.575 "data_offset": 0, 00:19:34.575 "data_size": 65536 
00:19:34.575 } 00:19:34.575 ] 00:19:34.575 }' 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:34.575 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.835 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:34.835 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:34.835 04:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.835 04:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.835 [2024-11-27 04:41:22.254882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.835 04:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.835 04:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:34.835 [2024-11-27 04:41:22.341569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:34.835 [2024-11-27 04:41:22.344307] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:35.093 [2024-11-27 04:41:22.466828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:35.093 [2024-11-27 04:41:22.468515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:35.351 [2024-11-27 04:41:22.735476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:35.609 121.67 IOPS, 365.00 MiB/s [2024-11-27T04:41:23.232Z] [2024-11-27 04:41:23.077425] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:35.887 [2024-11-27 04:41:23.303538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:35.887 [2024-11-27 04:41:23.304487] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.887 "name": "raid_bdev1", 00:19:35.887 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:35.887 "strip_size_kb": 0, 00:19:35.887 "state": "online", 00:19:35.887 "raid_level": "raid1", 00:19:35.887 "superblock": false, 00:19:35.887 "num_base_bdevs": 4, 00:19:35.887 "num_base_bdevs_discovered": 4, 00:19:35.887 "num_base_bdevs_operational": 
4, 00:19:35.887 "process": { 00:19:35.887 "type": "rebuild", 00:19:35.887 "target": "spare", 00:19:35.887 "progress": { 00:19:35.887 "blocks": 10240, 00:19:35.887 "percent": 15 00:19:35.887 } 00:19:35.887 }, 00:19:35.887 "base_bdevs_list": [ 00:19:35.887 { 00:19:35.887 "name": "spare", 00:19:35.887 "uuid": "63bd131b-8134-516a-879b-44a296880c16", 00:19:35.887 "is_configured": true, 00:19:35.887 "data_offset": 0, 00:19:35.887 "data_size": 65536 00:19:35.887 }, 00:19:35.887 { 00:19:35.887 "name": "BaseBdev2", 00:19:35.887 "uuid": "01f7e8db-a879-51b3-b897-062f743c79c6", 00:19:35.887 "is_configured": true, 00:19:35.887 "data_offset": 0, 00:19:35.887 "data_size": 65536 00:19:35.887 }, 00:19:35.887 { 00:19:35.887 "name": "BaseBdev3", 00:19:35.887 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:35.887 "is_configured": true, 00:19:35.887 "data_offset": 0, 00:19:35.887 "data_size": 65536 00:19:35.887 }, 00:19:35.887 { 00:19:35.887 "name": "BaseBdev4", 00:19:35.887 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:35.887 "is_configured": true, 00:19:35.887 "data_offset": 0, 00:19:35.887 "data_size": 65536 00:19:35.887 } 00:19:35.887 ] 00:19:35.887 }' 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:35.887 04:41:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.887 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.887 [2024-11-27 04:41:23.474116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:36.147 [2024-11-27 04:41:23.664089] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:19:36.147 [2024-11-27 04:41:23.664147] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.147 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.147 "name": "raid_bdev1", 00:19:36.147 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:36.147 "strip_size_kb": 0, 00:19:36.147 "state": "online", 00:19:36.147 "raid_level": "raid1", 00:19:36.147 "superblock": false, 00:19:36.147 "num_base_bdevs": 4, 00:19:36.147 "num_base_bdevs_discovered": 3, 00:19:36.147 "num_base_bdevs_operational": 3, 00:19:36.147 "process": { 00:19:36.147 "type": "rebuild", 00:19:36.147 "target": "spare", 00:19:36.147 "progress": { 00:19:36.147 "blocks": 12288, 00:19:36.147 "percent": 18 00:19:36.147 } 00:19:36.147 }, 00:19:36.147 "base_bdevs_list": [ 00:19:36.147 { 00:19:36.148 "name": "spare", 00:19:36.148 "uuid": "63bd131b-8134-516a-879b-44a296880c16", 00:19:36.148 "is_configured": true, 00:19:36.148 "data_offset": 0, 00:19:36.148 "data_size": 65536 00:19:36.148 }, 00:19:36.148 { 00:19:36.148 "name": null, 00:19:36.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.148 "is_configured": false, 00:19:36.148 "data_offset": 0, 00:19:36.148 "data_size": 65536 00:19:36.148 }, 00:19:36.148 { 00:19:36.148 "name": "BaseBdev3", 00:19:36.148 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:36.148 "is_configured": true, 00:19:36.148 "data_offset": 0, 00:19:36.148 "data_size": 65536 00:19:36.148 }, 00:19:36.148 { 00:19:36.148 "name": "BaseBdev4", 00:19:36.148 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:36.148 "is_configured": true, 00:19:36.148 "data_offset": 0, 00:19:36.148 "data_size": 65536 00:19:36.148 } 00:19:36.148 ] 00:19:36.148 }' 00:19:36.148 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.407 04:41:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.408 [2024-11-27 04:41:23.798294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:36.408 108.25 IOPS, 324.75 MiB/s [2024-11-27T04:41:24.031Z] 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=524 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.408 "name": "raid_bdev1", 
00:19:36.408 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:36.408 "strip_size_kb": 0, 00:19:36.408 "state": "online", 00:19:36.408 "raid_level": "raid1", 00:19:36.408 "superblock": false, 00:19:36.408 "num_base_bdevs": 4, 00:19:36.408 "num_base_bdevs_discovered": 3, 00:19:36.408 "num_base_bdevs_operational": 3, 00:19:36.408 "process": { 00:19:36.408 "type": "rebuild", 00:19:36.408 "target": "spare", 00:19:36.408 "progress": { 00:19:36.408 "blocks": 14336, 00:19:36.408 "percent": 21 00:19:36.408 } 00:19:36.408 }, 00:19:36.408 "base_bdevs_list": [ 00:19:36.408 { 00:19:36.408 "name": "spare", 00:19:36.408 "uuid": "63bd131b-8134-516a-879b-44a296880c16", 00:19:36.408 "is_configured": true, 00:19:36.408 "data_offset": 0, 00:19:36.408 "data_size": 65536 00:19:36.408 }, 00:19:36.408 { 00:19:36.408 "name": null, 00:19:36.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.408 "is_configured": false, 00:19:36.408 "data_offset": 0, 00:19:36.408 "data_size": 65536 00:19:36.408 }, 00:19:36.408 { 00:19:36.408 "name": "BaseBdev3", 00:19:36.408 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:36.408 "is_configured": true, 00:19:36.408 "data_offset": 0, 00:19:36.408 "data_size": 65536 00:19:36.408 }, 00:19:36.408 { 00:19:36.408 "name": "BaseBdev4", 00:19:36.408 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:36.408 "is_configured": true, 00:19:36.408 "data_offset": 0, 00:19:36.408 "data_size": 65536 00:19:36.408 } 00:19:36.408 ] 00:19:36.408 }' 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.408 04:41:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:36.667 [2024-11-27 04:41:24.274983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:37.233 [2024-11-27 04:41:24.708547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:37.492 102.60 IOPS, 307.80 MiB/s [2024-11-27T04:41:25.115Z] 04:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.492 04:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.492 04:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.492 04:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.492 04:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.492 04:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.492 04:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.492 04:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.492 04:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.492 04:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.492 04:41:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.492 04:41:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.492 "name": "raid_bdev1", 00:19:37.492 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:37.492 "strip_size_kb": 0, 00:19:37.492 "state": "online", 00:19:37.492 "raid_level": "raid1", 00:19:37.492 "superblock": false, 00:19:37.492 "num_base_bdevs": 
4, 00:19:37.492 "num_base_bdevs_discovered": 3, 00:19:37.492 "num_base_bdevs_operational": 3, 00:19:37.492 "process": { 00:19:37.492 "type": "rebuild", 00:19:37.492 "target": "spare", 00:19:37.492 "progress": { 00:19:37.492 "blocks": 30720, 00:19:37.492 "percent": 46 00:19:37.492 } 00:19:37.492 }, 00:19:37.492 "base_bdevs_list": [ 00:19:37.492 { 00:19:37.492 "name": "spare", 00:19:37.492 "uuid": "63bd131b-8134-516a-879b-44a296880c16", 00:19:37.492 "is_configured": true, 00:19:37.492 "data_offset": 0, 00:19:37.492 "data_size": 65536 00:19:37.492 }, 00:19:37.492 { 00:19:37.492 "name": null, 00:19:37.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.492 "is_configured": false, 00:19:37.492 "data_offset": 0, 00:19:37.492 "data_size": 65536 00:19:37.492 }, 00:19:37.492 { 00:19:37.492 "name": "BaseBdev3", 00:19:37.492 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:37.492 "is_configured": true, 00:19:37.492 "data_offset": 0, 00:19:37.492 "data_size": 65536 00:19:37.492 }, 00:19:37.492 { 00:19:37.492 "name": "BaseBdev4", 00:19:37.492 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:37.492 "is_configured": true, 00:19:37.492 "data_offset": 0, 00:19:37.492 "data_size": 65536 00:19:37.492 } 00:19:37.492 ] 00:19:37.492 }' 00:19:37.492 04:41:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.492 [2024-11-27 04:41:25.046376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:37.492 04:41:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.492 04:41:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.749 04:41:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.749 04:41:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:38.007 [2024-11-27 
04:41:25.474101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:19:38.828 93.50 IOPS, 280.50 MiB/s [2024-11-27T04:41:26.451Z] 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.828 [2024-11-27 04:41:26.149162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.828 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.828 "name": "raid_bdev1", 00:19:38.829 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:38.829 "strip_size_kb": 0, 00:19:38.829 "state": "online", 00:19:38.829 "raid_level": "raid1", 00:19:38.829 "superblock": false, 00:19:38.829 "num_base_bdevs": 4, 00:19:38.829 "num_base_bdevs_discovered": 3, 00:19:38.829 
"num_base_bdevs_operational": 3, 00:19:38.829 "process": { 00:19:38.829 "type": "rebuild", 00:19:38.829 "target": "spare", 00:19:38.829 "progress": { 00:19:38.829 "blocks": 51200, 00:19:38.829 "percent": 78 00:19:38.829 } 00:19:38.829 }, 00:19:38.829 "base_bdevs_list": [ 00:19:38.829 { 00:19:38.829 "name": "spare", 00:19:38.829 "uuid": "63bd131b-8134-516a-879b-44a296880c16", 00:19:38.829 "is_configured": true, 00:19:38.829 "data_offset": 0, 00:19:38.829 "data_size": 65536 00:19:38.829 }, 00:19:38.829 { 00:19:38.829 "name": null, 00:19:38.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.829 "is_configured": false, 00:19:38.829 "data_offset": 0, 00:19:38.829 "data_size": 65536 00:19:38.829 }, 00:19:38.829 { 00:19:38.829 "name": "BaseBdev3", 00:19:38.829 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:38.829 "is_configured": true, 00:19:38.829 "data_offset": 0, 00:19:38.829 "data_size": 65536 00:19:38.829 }, 00:19:38.829 { 00:19:38.829 "name": "BaseBdev4", 00:19:38.829 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:38.829 "is_configured": true, 00:19:38.829 "data_offset": 0, 00:19:38.829 "data_size": 65536 00:19:38.829 } 00:19:38.829 ] 00:19:38.829 }' 00:19:38.829 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.829 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.829 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.829 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.829 04:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:39.395 [2024-11-27 04:41:26.716183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:39.653 85.57 IOPS, 256.71 MiB/s [2024-11-27T04:41:27.276Z] [2024-11-27 04:41:27.059268] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:39.653 [2024-11-27 04:41:27.167276] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:39.653 [2024-11-27 04:41:27.170916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.911 "name": "raid_bdev1", 00:19:39.911 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:39.911 "strip_size_kb": 0, 00:19:39.911 "state": "online", 00:19:39.911 "raid_level": "raid1", 00:19:39.911 "superblock": false, 00:19:39.911 "num_base_bdevs": 4, 00:19:39.911 "num_base_bdevs_discovered": 3, 00:19:39.911 
"num_base_bdevs_operational": 3, 00:19:39.911 "base_bdevs_list": [ 00:19:39.911 { 00:19:39.911 "name": "spare", 00:19:39.911 "uuid": "63bd131b-8134-516a-879b-44a296880c16", 00:19:39.911 "is_configured": true, 00:19:39.911 "data_offset": 0, 00:19:39.911 "data_size": 65536 00:19:39.911 }, 00:19:39.911 { 00:19:39.911 "name": null, 00:19:39.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.911 "is_configured": false, 00:19:39.911 "data_offset": 0, 00:19:39.911 "data_size": 65536 00:19:39.911 }, 00:19:39.911 { 00:19:39.911 "name": "BaseBdev3", 00:19:39.911 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:39.911 "is_configured": true, 00:19:39.911 "data_offset": 0, 00:19:39.911 "data_size": 65536 00:19:39.911 }, 00:19:39.911 { 00:19:39.911 "name": "BaseBdev4", 00:19:39.911 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:39.911 "is_configured": true, 00:19:39.911 "data_offset": 0, 00:19:39.911 "data_size": 65536 00:19:39.911 } 00:19:39.911 ] 00:19:39.911 }' 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:39.911 
04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.911 04:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.169 "name": "raid_bdev1", 00:19:40.169 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:40.169 "strip_size_kb": 0, 00:19:40.169 "state": "online", 00:19:40.169 "raid_level": "raid1", 00:19:40.169 "superblock": false, 00:19:40.169 "num_base_bdevs": 4, 00:19:40.169 "num_base_bdevs_discovered": 3, 00:19:40.169 "num_base_bdevs_operational": 3, 00:19:40.169 "base_bdevs_list": [ 00:19:40.169 { 00:19:40.169 "name": "spare", 00:19:40.169 "uuid": "63bd131b-8134-516a-879b-44a296880c16", 00:19:40.169 "is_configured": true, 00:19:40.169 "data_offset": 0, 00:19:40.169 "data_size": 65536 00:19:40.169 }, 00:19:40.169 { 00:19:40.169 "name": null, 00:19:40.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.169 "is_configured": false, 00:19:40.169 "data_offset": 0, 00:19:40.169 "data_size": 65536 00:19:40.169 }, 00:19:40.169 { 00:19:40.169 "name": "BaseBdev3", 00:19:40.169 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:40.169 "is_configured": true, 00:19:40.169 "data_offset": 0, 00:19:40.169 "data_size": 65536 00:19:40.169 }, 00:19:40.169 { 00:19:40.169 "name": "BaseBdev4", 00:19:40.169 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:40.169 "is_configured": true, 00:19:40.169 "data_offset": 0, 00:19:40.169 "data_size": 
65536 00:19:40.169 } 00:19:40.169 ] 00:19:40.169 }' 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.169 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.170 04:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.170 04:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.170 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.170 04:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.170 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.170 "name": "raid_bdev1", 00:19:40.170 "uuid": "1fb474b4-d8d2-465f-aa97-ec8569be7b90", 00:19:40.170 "strip_size_kb": 0, 00:19:40.170 "state": "online", 00:19:40.170 "raid_level": "raid1", 00:19:40.170 "superblock": false, 00:19:40.170 "num_base_bdevs": 4, 00:19:40.170 "num_base_bdevs_discovered": 3, 00:19:40.170 "num_base_bdevs_operational": 3, 00:19:40.170 "base_bdevs_list": [ 00:19:40.170 { 00:19:40.170 "name": "spare", 00:19:40.170 "uuid": "63bd131b-8134-516a-879b-44a296880c16", 00:19:40.170 "is_configured": true, 00:19:40.170 "data_offset": 0, 00:19:40.170 "data_size": 65536 00:19:40.170 }, 00:19:40.170 { 00:19:40.170 "name": null, 00:19:40.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.170 "is_configured": false, 00:19:40.170 "data_offset": 0, 00:19:40.170 "data_size": 65536 00:19:40.170 }, 00:19:40.170 { 00:19:40.170 "name": "BaseBdev3", 00:19:40.170 "uuid": "4ba83153-02be-5fd0-b23d-b752594013aa", 00:19:40.170 "is_configured": true, 00:19:40.170 "data_offset": 0, 00:19:40.170 "data_size": 65536 00:19:40.170 }, 00:19:40.170 { 00:19:40.170 "name": "BaseBdev4", 00:19:40.170 "uuid": "11c4d929-0c59-5a88-a8c6-2043d6c99cd5", 00:19:40.170 "is_configured": true, 00:19:40.170 "data_offset": 0, 00:19:40.170 "data_size": 65536 00:19:40.170 } 00:19:40.170 ] 00:19:40.170 }' 00:19:40.170 04:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.170 04:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.685 79.12 IOPS, 237.38 MiB/s [2024-11-27T04:41:28.308Z] 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:40.685 04:41:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.685 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.685 [2024-11-27 04:41:28.166252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.685 [2024-11-27 04:41:28.166297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.685 00:19:40.685 Latency(us) 00:19:40.685 [2024-11-27T04:41:28.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.685 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:40.685 raid_bdev1 : 8.48 76.14 228.43 0.00 0.00 18203.43 318.37 113913.48 00:19:40.685 [2024-11-27T04:41:28.308Z] =================================================================================================================== 00:19:40.685 [2024-11-27T04:41:28.308Z] Total : 76.14 228.43 0.00 0.00 18203.43 318.37 113913.48 00:19:40.685 [2024-11-27 04:41:28.286314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.685 [2024-11-27 04:41:28.286435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.685 [2024-11-27 04:41:28.286579] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.685 [2024-11-27 04:41:28.286601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:40.685 { 00:19:40.685 "results": [ 00:19:40.685 { 00:19:40.685 "job": "raid_bdev1", 00:19:40.685 "core_mask": "0x1", 00:19:40.685 "workload": "randrw", 00:19:40.685 "percentage": 50, 00:19:40.685 "status": "finished", 00:19:40.685 "queue_depth": 2, 00:19:40.685 "io_size": 3145728, 00:19:40.685 "runtime": 8.483986, 00:19:40.685 "iops": 76.14345426784061, 00:19:40.685 "mibps": 228.43036280352183, 00:19:40.685 "io_failed": 0, 00:19:40.685 "io_timeout": 0, 00:19:40.685 
"avg_latency_us": 18203.425792288206, 00:19:40.685 "min_latency_us": 318.3709090909091, 00:19:40.685 "max_latency_us": 113913.48363636364 00:19:40.685 } 00:19:40.685 ], 00:19:40.685 "core_count": 1 00:19:40.685 } 00:19:40.685 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.685 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.685 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.685 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.685 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:40.685 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.943 04:41:28 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.943 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:41.201 /dev/nbd0 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.201 1+0 records in 00:19:41.201 1+0 records out 00:19:41.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355998 s, 11.5 MB/s 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.201 04:41:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:41.460 /dev/nbd1 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.460 1+0 records in 00:19:41.460 1+0 records out 00:19:41.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309484 s, 13.2 MB/s 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:41.460 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.722 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.983 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:42.242 /dev/nbd1 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.500 1+0 records in 00:19:42.500 1+0 records out 00:19:42.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305636 s, 13.4 MB/s 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:42.500 04:41:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:42.846 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:42.846 04:41:30 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79218 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79218 ']' 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79218 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79218 00:19:43.125 killing process with pid 79218 00:19:43.125 Received shutdown signal, test time was about 10.796205 seconds 00:19:43.125 00:19:43.125 Latency(us) 00:19:43.125 [2024-11-27T04:41:30.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.125 
[2024-11-27T04:41:30.748Z] =================================================================================================================== 00:19:43.125 [2024-11-27T04:41:30.748Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79218' 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79218 00:19:43.125 04:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79218 00:19:43.125 [2024-11-27 04:41:30.578423] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:43.384 [2024-11-27 04:41:30.961136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:44.759 00:19:44.759 real 0m14.555s 00:19:44.759 user 0m19.114s 00:19:44.759 sys 0m1.841s 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.759 ************************************ 00:19:44.759 END TEST raid_rebuild_test_io 00:19:44.759 ************************************ 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.759 04:41:32 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:19:44.759 04:41:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:44.759 04:41:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.759 04:41:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:44.759 ************************************ 00:19:44.759 START TEST 
raid_rebuild_test_sb_io 00:19:44.759 ************************************ 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 
00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79634 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79634 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79634 ']' 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.759 04:41:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.759 [2024-11-27 04:41:32.317806] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:19:44.759 [2024-11-27 04:41:32.318157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79634 ] 00:19:44.759 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:44.759 Zero copy mechanism will not be used. 
00:19:45.018 [2024-11-27 04:41:32.500842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.018 [2024-11-27 04:41:32.638840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.276 [2024-11-27 04:41:32.856347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.276 [2024-11-27 04:41:32.856644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.842 BaseBdev1_malloc 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.842 [2024-11-27 04:41:33.406645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:45.842 [2024-11-27 04:41:33.406825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.842 [2024-11-27 04:41:33.406902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:19:45.842 [2024-11-27 04:41:33.406942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.842 [2024-11-27 04:41:33.411084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.842 [2024-11-27 04:41:33.411160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:45.842 BaseBdev1 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.842 BaseBdev2_malloc 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.842 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.102 [2024-11-27 04:41:33.466422] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:46.102 [2024-11-27 04:41:33.466536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.102 [2024-11-27 04:41:33.466578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:46.102 [2024-11-27 04:41:33.466605] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.102 [2024-11-27 04:41:33.469689] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.102 [2024-11-27 04:41:33.470868] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:46.102 BaseBdev2 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.102 BaseBdev3_malloc 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.102 [2024-11-27 04:41:33.550411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:46.102 [2024-11-27 04:41:33.550497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.102 [2024-11-27 04:41:33.550532] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:46.102 [2024-11-27 04:41:33.550550] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.102 [2024-11-27 04:41:33.553520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.102 [2024-11-27 04:41:33.553595] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:19:46.102 BaseBdev3 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.102 BaseBdev4_malloc 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.102 [2024-11-27 04:41:33.608123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:46.102 [2024-11-27 04:41:33.608381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.102 [2024-11-27 04:41:33.608426] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:46.102 [2024-11-27 04:41:33.608447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.102 [2024-11-27 04:41:33.611374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.102 [2024-11-27 04:41:33.611547] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:46.102 BaseBdev4 00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:46.102 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.103 spare_malloc 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.103 spare_delay 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.103 [2024-11-27 04:41:33.678487] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:46.103 [2024-11-27 04:41:33.678738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.103 [2024-11-27 04:41:33.678802] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:46.103 [2024-11-27 04:41:33.678831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.103 [2024-11-27 04:41:33.681874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.103 [2024-11-27 04:41:33.681930] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:46.103 spare 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.103 [2024-11-27 04:41:33.686645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.103 [2024-11-27 04:41:33.689253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.103 [2024-11-27 04:41:33.689481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.103 [2024-11-27 04:41:33.689691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:46.103 [2024-11-27 04:41:33.690106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:46.103 [2024-11-27 04:41:33.690253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:46.103 [2024-11-27 04:41:33.690701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:46.103 [2024-11-27 04:41:33.691083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:46.103 [2024-11-27 04:41:33.691212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:46.103 [2024-11-27 04:41:33.691619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.103 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.361 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.361 "name": "raid_bdev1", 00:19:46.361 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:46.361 "strip_size_kb": 0, 00:19:46.361 "state": "online", 00:19:46.361 "raid_level": "raid1", 
00:19:46.361 "superblock": true, 00:19:46.361 "num_base_bdevs": 4, 00:19:46.361 "num_base_bdevs_discovered": 4, 00:19:46.362 "num_base_bdevs_operational": 4, 00:19:46.362 "base_bdevs_list": [ 00:19:46.362 { 00:19:46.362 "name": "BaseBdev1", 00:19:46.362 "uuid": "14ec66b0-2ed3-52e2-a537-a9db51a68aaf", 00:19:46.362 "is_configured": true, 00:19:46.362 "data_offset": 2048, 00:19:46.362 "data_size": 63488 00:19:46.362 }, 00:19:46.362 { 00:19:46.362 "name": "BaseBdev2", 00:19:46.362 "uuid": "0727ce7f-8bb1-554d-94da-977ac9086372", 00:19:46.362 "is_configured": true, 00:19:46.362 "data_offset": 2048, 00:19:46.362 "data_size": 63488 00:19:46.362 }, 00:19:46.362 { 00:19:46.362 "name": "BaseBdev3", 00:19:46.362 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:46.362 "is_configured": true, 00:19:46.362 "data_offset": 2048, 00:19:46.362 "data_size": 63488 00:19:46.362 }, 00:19:46.362 { 00:19:46.362 "name": "BaseBdev4", 00:19:46.362 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:46.362 "is_configured": true, 00:19:46.362 "data_offset": 2048, 00:19:46.362 "data_size": 63488 00:19:46.362 } 00:19:46.362 ] 00:19:46.362 }' 00:19:46.362 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.362 04:41:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.620 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:46.620 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:46.620 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.620 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.620 [2024-11-27 04:41:34.196124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:46.620 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:46.620 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:46.620 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:46.620 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.620 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.620 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.880 [2024-11-27 04:41:34.307668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.880 04:41:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.880 "name": "raid_bdev1", 00:19:46.880 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:46.880 "strip_size_kb": 0, 00:19:46.880 "state": "online", 00:19:46.880 "raid_level": "raid1", 00:19:46.880 "superblock": true, 00:19:46.880 "num_base_bdevs": 4, 00:19:46.880 "num_base_bdevs_discovered": 3, 00:19:46.880 "num_base_bdevs_operational": 3, 00:19:46.880 "base_bdevs_list": [ 00:19:46.880 { 00:19:46.880 "name": null, 00:19:46.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.880 "is_configured": false, 00:19:46.880 "data_offset": 0, 00:19:46.880 "data_size": 
63488 00:19:46.880 }, 00:19:46.880 { 00:19:46.880 "name": "BaseBdev2", 00:19:46.880 "uuid": "0727ce7f-8bb1-554d-94da-977ac9086372", 00:19:46.880 "is_configured": true, 00:19:46.880 "data_offset": 2048, 00:19:46.880 "data_size": 63488 00:19:46.880 }, 00:19:46.880 { 00:19:46.880 "name": "BaseBdev3", 00:19:46.880 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:46.880 "is_configured": true, 00:19:46.880 "data_offset": 2048, 00:19:46.880 "data_size": 63488 00:19:46.880 }, 00:19:46.880 { 00:19:46.880 "name": "BaseBdev4", 00:19:46.880 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:46.880 "is_configured": true, 00:19:46.880 "data_offset": 2048, 00:19:46.880 "data_size": 63488 00:19:46.880 } 00:19:46.880 ] 00:19:46.880 }' 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.880 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.880 [2024-11-27 04:41:34.436337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:46.880 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:46.880 Zero copy mechanism will not be used. 00:19:46.880 Running I/O for 60 seconds... 
00:19:47.448 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:47.448 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.448 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.448 [2024-11-27 04:41:34.820778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.448 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.448 04:41:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:47.448 [2024-11-27 04:41:34.870509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:47.448 [2024-11-27 04:41:34.873376] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.448 [2024-11-27 04:41:35.000157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:47.707 [2024-11-27 04:41:35.259928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:47.707 [2024-11-27 04:41:35.261115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:48.224 149.00 IOPS, 447.00 MiB/s [2024-11-27T04:41:35.847Z] [2024-11-27 04:41:35.608585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:48.224 [2024-11-27 04:41:35.710626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:48.224 [2024-11-27 04:41:35.710991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:48.482 04:41:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.482 "name": "raid_bdev1", 00:19:48.482 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:48.482 "strip_size_kb": 0, 00:19:48.482 "state": "online", 00:19:48.482 "raid_level": "raid1", 00:19:48.482 "superblock": true, 00:19:48.482 "num_base_bdevs": 4, 00:19:48.482 "num_base_bdevs_discovered": 4, 00:19:48.482 "num_base_bdevs_operational": 4, 00:19:48.482 "process": { 00:19:48.482 "type": "rebuild", 00:19:48.482 "target": "spare", 00:19:48.482 "progress": { 00:19:48.482 "blocks": 12288, 00:19:48.482 "percent": 19 00:19:48.482 } 00:19:48.482 }, 00:19:48.482 "base_bdevs_list": [ 00:19:48.482 { 00:19:48.482 "name": "spare", 00:19:48.482 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:48.482 "is_configured": true, 00:19:48.482 "data_offset": 2048, 
00:19:48.482 "data_size": 63488 00:19:48.482 }, 00:19:48.482 { 00:19:48.482 "name": "BaseBdev2", 00:19:48.482 "uuid": "0727ce7f-8bb1-554d-94da-977ac9086372", 00:19:48.482 "is_configured": true, 00:19:48.482 "data_offset": 2048, 00:19:48.482 "data_size": 63488 00:19:48.482 }, 00:19:48.482 { 00:19:48.482 "name": "BaseBdev3", 00:19:48.482 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:48.482 "is_configured": true, 00:19:48.482 "data_offset": 2048, 00:19:48.482 "data_size": 63488 00:19:48.482 }, 00:19:48.482 { 00:19:48.482 "name": "BaseBdev4", 00:19:48.482 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:48.482 "is_configured": true, 00:19:48.482 "data_offset": 2048, 00:19:48.482 "data_size": 63488 00:19:48.482 } 00:19:48.482 ] 00:19:48.482 }' 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.482 04:41:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.482 [2024-11-27 04:41:35.996079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:48.482 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.482 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:48.482 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.482 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.482 [2024-11-27 04:41:36.035823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.741 [2024-11-27 04:41:36.116657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:19:48.741 [2024-11-27 04:41:36.220133] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:48.741 [2024-11-27 04:41:36.224873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.741 [2024-11-27 04:41:36.224928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.741 [2024-11-27 04:41:36.224950] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:48.741 [2024-11-27 04:41:36.241059] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.741 "name": "raid_bdev1", 00:19:48.741 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:48.741 "strip_size_kb": 0, 00:19:48.741 "state": "online", 00:19:48.741 "raid_level": "raid1", 00:19:48.741 "superblock": true, 00:19:48.741 "num_base_bdevs": 4, 00:19:48.741 "num_base_bdevs_discovered": 3, 00:19:48.741 "num_base_bdevs_operational": 3, 00:19:48.741 "base_bdevs_list": [ 00:19:48.741 { 00:19:48.741 "name": null, 00:19:48.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.741 "is_configured": false, 00:19:48.741 "data_offset": 0, 00:19:48.741 "data_size": 63488 00:19:48.741 }, 00:19:48.741 { 00:19:48.741 "name": "BaseBdev2", 00:19:48.741 "uuid": "0727ce7f-8bb1-554d-94da-977ac9086372", 00:19:48.741 "is_configured": true, 00:19:48.741 "data_offset": 2048, 00:19:48.741 "data_size": 63488 00:19:48.741 }, 00:19:48.741 { 00:19:48.741 "name": "BaseBdev3", 00:19:48.741 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:48.741 "is_configured": true, 00:19:48.741 "data_offset": 2048, 00:19:48.741 "data_size": 63488 00:19:48.741 }, 00:19:48.741 { 00:19:48.741 "name": "BaseBdev4", 00:19:48.741 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:48.741 "is_configured": true, 00:19:48.741 "data_offset": 2048, 00:19:48.741 "data_size": 63488 00:19:48.741 } 00:19:48.741 ] 00:19:48.741 }' 00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:19:48.741 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.257 114.50 IOPS, 343.50 MiB/s [2024-11-27T04:41:36.880Z] 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.257 "name": "raid_bdev1", 00:19:49.257 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:49.257 "strip_size_kb": 0, 00:19:49.257 "state": "online", 00:19:49.257 "raid_level": "raid1", 00:19:49.257 "superblock": true, 00:19:49.257 "num_base_bdevs": 4, 00:19:49.257 "num_base_bdevs_discovered": 3, 00:19:49.257 "num_base_bdevs_operational": 3, 00:19:49.257 "base_bdevs_list": [ 00:19:49.257 { 00:19:49.257 "name": null, 00:19:49.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.257 "is_configured": false, 00:19:49.257 "data_offset": 0, 00:19:49.257 "data_size": 63488 00:19:49.257 }, 
00:19:49.257 { 00:19:49.257 "name": "BaseBdev2", 00:19:49.257 "uuid": "0727ce7f-8bb1-554d-94da-977ac9086372", 00:19:49.257 "is_configured": true, 00:19:49.257 "data_offset": 2048, 00:19:49.257 "data_size": 63488 00:19:49.257 }, 00:19:49.257 { 00:19:49.257 "name": "BaseBdev3", 00:19:49.257 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:49.257 "is_configured": true, 00:19:49.257 "data_offset": 2048, 00:19:49.257 "data_size": 63488 00:19:49.257 }, 00:19:49.257 { 00:19:49.257 "name": "BaseBdev4", 00:19:49.257 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:49.257 "is_configured": true, 00:19:49.257 "data_offset": 2048, 00:19:49.257 "data_size": 63488 00:19:49.257 } 00:19:49.257 ] 00:19:49.257 }' 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:49.257 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.515 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:49.515 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:49.515 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.515 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.515 [2024-11-27 04:41:36.920420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.515 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.515 04:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:49.515 [2024-11-27 04:41:37.024252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:49.515 [2024-11-27 
04:41:37.027086] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.774 [2024-11-27 04:41:37.140423] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:49.774 [2024-11-27 04:41:37.141218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:49.774 [2024-11-27 04:41:37.289632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:49.774 [2024-11-27 04:41:37.290836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:50.290 120.00 IOPS, 360.00 MiB/s [2024-11-27T04:41:37.913Z] [2024-11-27 04:41:37.808146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:50.290 [2024-11-27 04:41:37.809107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:50.548 04:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.548 04:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.548 04:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.548 04:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.548 04:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.549 04:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.549 04:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.549 04:41:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.549 04:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.549 "name": "raid_bdev1", 00:19:50.549 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:50.549 "strip_size_kb": 0, 00:19:50.549 "state": "online", 00:19:50.549 "raid_level": "raid1", 00:19:50.549 "superblock": true, 00:19:50.549 "num_base_bdevs": 4, 00:19:50.549 "num_base_bdevs_discovered": 4, 00:19:50.549 "num_base_bdevs_operational": 4, 00:19:50.549 "process": { 00:19:50.549 "type": "rebuild", 00:19:50.549 "target": "spare", 00:19:50.549 "progress": { 00:19:50.549 "blocks": 10240, 00:19:50.549 "percent": 16 00:19:50.549 } 00:19:50.549 }, 00:19:50.549 "base_bdevs_list": [ 00:19:50.549 { 00:19:50.549 "name": "spare", 00:19:50.549 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:50.549 "is_configured": true, 00:19:50.549 "data_offset": 2048, 00:19:50.549 "data_size": 63488 00:19:50.549 }, 00:19:50.549 { 00:19:50.549 "name": "BaseBdev2", 00:19:50.549 "uuid": "0727ce7f-8bb1-554d-94da-977ac9086372", 00:19:50.549 "is_configured": true, 00:19:50.549 "data_offset": 2048, 00:19:50.549 "data_size": 63488 00:19:50.549 }, 00:19:50.549 { 00:19:50.549 "name": "BaseBdev3", 00:19:50.549 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:50.549 "is_configured": true, 00:19:50.549 "data_offset": 2048, 00:19:50.549 "data_size": 63488 00:19:50.549 }, 00:19:50.549 { 00:19:50.549 "name": "BaseBdev4", 00:19:50.549 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:50.549 "is_configured": true, 00:19:50.549 "data_offset": 2048, 00:19:50.549 "data_size": 63488 00:19:50.549 } 00:19:50.549 ] 00:19:50.549 }' 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:50.549 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.549 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.549 [2024-11-27 04:41:38.150517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:50.549 [2024-11-27 04:41:38.161087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:50.807 [2024-11-27 04:41:38.272134] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:19:50.807 [2024-11-27 04:41:38.272705] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:19:50.807 [2024-11-27 04:41:38.272850] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.807 "name": "raid_bdev1", 00:19:50.807 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:50.807 "strip_size_kb": 0, 00:19:50.807 "state": "online", 00:19:50.807 "raid_level": "raid1", 00:19:50.807 "superblock": true, 00:19:50.807 "num_base_bdevs": 4, 00:19:50.807 "num_base_bdevs_discovered": 3, 00:19:50.807 
"num_base_bdevs_operational": 3, 00:19:50.807 "process": { 00:19:50.807 "type": "rebuild", 00:19:50.807 "target": "spare", 00:19:50.807 "progress": { 00:19:50.807 "blocks": 14336, 00:19:50.807 "percent": 22 00:19:50.807 } 00:19:50.807 }, 00:19:50.807 "base_bdevs_list": [ 00:19:50.807 { 00:19:50.807 "name": "spare", 00:19:50.807 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:50.807 "is_configured": true, 00:19:50.807 "data_offset": 2048, 00:19:50.807 "data_size": 63488 00:19:50.807 }, 00:19:50.807 { 00:19:50.807 "name": null, 00:19:50.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.807 "is_configured": false, 00:19:50.807 "data_offset": 0, 00:19:50.807 "data_size": 63488 00:19:50.807 }, 00:19:50.807 { 00:19:50.807 "name": "BaseBdev3", 00:19:50.807 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:50.807 "is_configured": true, 00:19:50.807 "data_offset": 2048, 00:19:50.807 "data_size": 63488 00:19:50.807 }, 00:19:50.807 { 00:19:50.807 "name": "BaseBdev4", 00:19:50.807 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:50.807 "is_configured": true, 00:19:50.807 "data_offset": 2048, 00:19:50.807 "data_size": 63488 00:19:50.807 } 00:19:50.807 ] 00:19:50.807 }' 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.807 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.807 [2024-11-27 04:41:38.411717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=539 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.068 113.50 IOPS, 340.50 MiB/s [2024-11-27T04:41:38.691Z] 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.068 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.068 "name": "raid_bdev1", 00:19:51.068 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:51.068 "strip_size_kb": 0, 00:19:51.068 "state": "online", 00:19:51.068 "raid_level": "raid1", 00:19:51.068 "superblock": true, 00:19:51.068 "num_base_bdevs": 4, 00:19:51.068 "num_base_bdevs_discovered": 3, 00:19:51.068 "num_base_bdevs_operational": 3, 00:19:51.068 "process": { 00:19:51.068 "type": "rebuild", 00:19:51.068 "target": "spare", 00:19:51.068 "progress": { 00:19:51.068 "blocks": 16384, 00:19:51.068 "percent": 25 00:19:51.068 } 00:19:51.068 }, 00:19:51.068 "base_bdevs_list": [ 00:19:51.068 { 00:19:51.068 "name": "spare", 
00:19:51.068 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:51.068 "is_configured": true, 00:19:51.068 "data_offset": 2048, 00:19:51.068 "data_size": 63488 00:19:51.068 }, 00:19:51.068 { 00:19:51.068 "name": null, 00:19:51.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.068 "is_configured": false, 00:19:51.068 "data_offset": 0, 00:19:51.069 "data_size": 63488 00:19:51.069 }, 00:19:51.069 { 00:19:51.069 "name": "BaseBdev3", 00:19:51.069 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:51.069 "is_configured": true, 00:19:51.069 "data_offset": 2048, 00:19:51.069 "data_size": 63488 00:19:51.069 }, 00:19:51.069 { 00:19:51.069 "name": "BaseBdev4", 00:19:51.069 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:51.069 "is_configured": true, 00:19:51.069 "data_offset": 2048, 00:19:51.069 "data_size": 63488 00:19:51.069 } 00:19:51.069 ] 00:19:51.069 }' 00:19:51.069 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.069 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.069 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.069 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.069 04:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:51.334 [2024-11-27 04:41:38.763754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:51.900 [2024-11-27 04:41:39.230620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:51.900 [2024-11-27 04:41:39.231344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:51.900 [2024-11-27 04:41:39.343470] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:52.157 101.80 IOPS, 305.40 MiB/s [2024-11-27T04:41:39.781Z] 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.158 "name": "raid_bdev1", 00:19:52.158 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:52.158 "strip_size_kb": 0, 00:19:52.158 "state": "online", 00:19:52.158 "raid_level": "raid1", 00:19:52.158 "superblock": true, 00:19:52.158 "num_base_bdevs": 4, 00:19:52.158 "num_base_bdevs_discovered": 3, 00:19:52.158 "num_base_bdevs_operational": 3, 00:19:52.158 "process": { 00:19:52.158 "type": "rebuild", 00:19:52.158 "target": "spare", 00:19:52.158 "progress": { 
00:19:52.158 "blocks": 30720, 00:19:52.158 "percent": 48 00:19:52.158 } 00:19:52.158 }, 00:19:52.158 "base_bdevs_list": [ 00:19:52.158 { 00:19:52.158 "name": "spare", 00:19:52.158 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:52.158 "is_configured": true, 00:19:52.158 "data_offset": 2048, 00:19:52.158 "data_size": 63488 00:19:52.158 }, 00:19:52.158 { 00:19:52.158 "name": null, 00:19:52.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.158 "is_configured": false, 00:19:52.158 "data_offset": 0, 00:19:52.158 "data_size": 63488 00:19:52.158 }, 00:19:52.158 { 00:19:52.158 "name": "BaseBdev3", 00:19:52.158 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:52.158 "is_configured": true, 00:19:52.158 "data_offset": 2048, 00:19:52.158 "data_size": 63488 00:19:52.158 }, 00:19:52.158 { 00:19:52.158 "name": "BaseBdev4", 00:19:52.158 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:52.158 "is_configured": true, 00:19:52.158 "data_offset": 2048, 00:19:52.158 "data_size": 63488 00:19:52.158 } 00:19:52.158 ] 00:19:52.158 }' 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.158 04:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:52.158 [2024-11-27 04:41:39.758794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:53.093 93.17 IOPS, 279.50 MiB/s [2024-11-27T04:41:40.716Z] [2024-11-27 04:41:40.483632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 
00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.352 "name": "raid_bdev1", 00:19:53.352 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:53.352 "strip_size_kb": 0, 00:19:53.352 "state": "online", 00:19:53.352 "raid_level": "raid1", 00:19:53.352 "superblock": true, 00:19:53.352 "num_base_bdevs": 4, 00:19:53.352 "num_base_bdevs_discovered": 3, 00:19:53.352 "num_base_bdevs_operational": 3, 00:19:53.352 "process": { 00:19:53.352 "type": "rebuild", 00:19:53.352 "target": "spare", 00:19:53.352 "progress": { 00:19:53.352 "blocks": 49152, 00:19:53.352 "percent": 77 00:19:53.352 } 00:19:53.352 }, 00:19:53.352 "base_bdevs_list": [ 00:19:53.352 { 00:19:53.352 "name": "spare", 
00:19:53.352 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:53.352 "is_configured": true, 00:19:53.352 "data_offset": 2048, 00:19:53.352 "data_size": 63488 00:19:53.352 }, 00:19:53.352 { 00:19:53.352 "name": null, 00:19:53.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.352 "is_configured": false, 00:19:53.352 "data_offset": 0, 00:19:53.352 "data_size": 63488 00:19:53.352 }, 00:19:53.352 { 00:19:53.352 "name": "BaseBdev3", 00:19:53.352 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:53.352 "is_configured": true, 00:19:53.352 "data_offset": 2048, 00:19:53.352 "data_size": 63488 00:19:53.352 }, 00:19:53.352 { 00:19:53.352 "name": "BaseBdev4", 00:19:53.352 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:53.352 "is_configured": true, 00:19:53.352 "data_offset": 2048, 00:19:53.352 "data_size": 63488 00:19:53.352 } 00:19:53.352 ] 00:19:53.352 }' 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.352 04:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:53.919 [2024-11-27 04:41:41.271084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:54.178 84.29 IOPS, 252.86 MiB/s [2024-11-27T04:41:41.801Z] [2024-11-27 04:41:41.616405] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:54.178 [2024-11-27 04:41:41.724385] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:54.178 [2024-11-27 04:41:41.728325] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.439 "name": "raid_bdev1", 00:19:54.439 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:54.439 "strip_size_kb": 0, 00:19:54.439 "state": "online", 00:19:54.439 "raid_level": "raid1", 00:19:54.439 "superblock": true, 00:19:54.439 "num_base_bdevs": 4, 00:19:54.439 "num_base_bdevs_discovered": 3, 00:19:54.439 "num_base_bdevs_operational": 3, 00:19:54.439 "base_bdevs_list": [ 00:19:54.439 { 00:19:54.439 "name": "spare", 00:19:54.439 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:54.439 "is_configured": true, 00:19:54.439 "data_offset": 2048, 00:19:54.439 
"data_size": 63488 00:19:54.439 }, 00:19:54.439 { 00:19:54.439 "name": null, 00:19:54.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.439 "is_configured": false, 00:19:54.439 "data_offset": 0, 00:19:54.439 "data_size": 63488 00:19:54.439 }, 00:19:54.439 { 00:19:54.439 "name": "BaseBdev3", 00:19:54.439 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:54.439 "is_configured": true, 00:19:54.439 "data_offset": 2048, 00:19:54.439 "data_size": 63488 00:19:54.439 }, 00:19:54.439 { 00:19:54.439 "name": "BaseBdev4", 00:19:54.439 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:54.439 "is_configured": true, 00:19:54.439 "data_offset": 2048, 00:19:54.439 "data_size": 63488 00:19:54.439 } 00:19:54.439 ] 00:19:54.439 }' 00:19:54.439 04:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.439 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:54.439 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.699 
04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.699 "name": "raid_bdev1", 00:19:54.699 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:54.699 "strip_size_kb": 0, 00:19:54.699 "state": "online", 00:19:54.699 "raid_level": "raid1", 00:19:54.699 "superblock": true, 00:19:54.699 "num_base_bdevs": 4, 00:19:54.699 "num_base_bdevs_discovered": 3, 00:19:54.699 "num_base_bdevs_operational": 3, 00:19:54.699 "base_bdevs_list": [ 00:19:54.699 { 00:19:54.699 "name": "spare", 00:19:54.699 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:54.699 "is_configured": true, 00:19:54.699 "data_offset": 2048, 00:19:54.699 "data_size": 63488 00:19:54.699 }, 00:19:54.699 { 00:19:54.699 "name": null, 00:19:54.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.699 "is_configured": false, 00:19:54.699 "data_offset": 0, 00:19:54.699 "data_size": 63488 00:19:54.699 }, 00:19:54.699 { 00:19:54.699 "name": "BaseBdev3", 00:19:54.699 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:54.699 "is_configured": true, 00:19:54.699 "data_offset": 2048, 00:19:54.699 "data_size": 63488 00:19:54.699 }, 00:19:54.699 { 00:19:54.699 "name": "BaseBdev4", 00:19:54.699 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:54.699 "is_configured": true, 00:19:54.699 "data_offset": 2048, 00:19:54.699 "data_size": 63488 00:19:54.699 } 00:19:54.699 ] 00:19:54.699 }' 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.699 04:41:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:54.699 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.699 "name": "raid_bdev1", 00:19:54.699 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:54.699 "strip_size_kb": 0, 00:19:54.699 "state": "online", 00:19:54.699 "raid_level": "raid1", 00:19:54.699 "superblock": true, 00:19:54.699 "num_base_bdevs": 4, 00:19:54.699 "num_base_bdevs_discovered": 3, 00:19:54.699 "num_base_bdevs_operational": 3, 00:19:54.699 "base_bdevs_list": [ 00:19:54.699 { 00:19:54.699 "name": "spare", 00:19:54.699 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:54.699 "is_configured": true, 00:19:54.699 "data_offset": 2048, 00:19:54.699 "data_size": 63488 00:19:54.699 }, 00:19:54.699 { 00:19:54.699 "name": null, 00:19:54.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.699 "is_configured": false, 00:19:54.699 "data_offset": 0, 00:19:54.699 "data_size": 63488 00:19:54.699 }, 00:19:54.699 { 00:19:54.699 "name": "BaseBdev3", 00:19:54.699 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:54.699 "is_configured": true, 00:19:54.699 "data_offset": 2048, 00:19:54.699 "data_size": 63488 00:19:54.699 }, 00:19:54.699 { 00:19:54.699 "name": "BaseBdev4", 00:19:54.699 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:54.699 "is_configured": true, 00:19:54.699 "data_offset": 2048, 00:19:54.699 "data_size": 63488 00:19:54.699 } 00:19:54.699 ] 00:19:54.699 }' 00:19:54.700 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.700 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.217 79.12 IOPS, 237.38 MiB/s [2024-11-27T04:41:42.840Z] 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:55.217 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.217 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:19:55.217 [2024-11-27 04:41:42.744367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:55.217 [2024-11-27 04:41:42.744413] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.475 00:19:55.475 Latency(us) 00:19:55.475 [2024-11-27T04:41:43.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.475 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:55.475 raid_bdev1 : 8.40 76.69 230.08 0.00 0.00 17722.15 301.61 114866.73 00:19:55.475 [2024-11-27T04:41:43.098Z] =================================================================================================================== 00:19:55.475 [2024-11-27T04:41:43.098Z] Total : 76.69 230.08 0.00 0.00 17722.15 301.61 114866.73 00:19:55.476 [2024-11-27 04:41:42.856335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.476 [2024-11-27 04:41:42.856573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.476 [2024-11-27 04:41:42.856847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.476 [2024-11-27 04:41:42.857026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:55.476 { 00:19:55.476 "results": [ 00:19:55.476 { 00:19:55.476 "job": "raid_bdev1", 00:19:55.476 "core_mask": "0x1", 00:19:55.476 "workload": "randrw", 00:19:55.476 "percentage": 50, 00:19:55.476 "status": "finished", 00:19:55.476 "queue_depth": 2, 00:19:55.476 "io_size": 3145728, 00:19:55.476 "runtime": 8.396977, 00:19:55.476 "iops": 76.6942674726869, 00:19:55.476 "mibps": 230.0828024180607, 00:19:55.476 "io_failed": 0, 00:19:55.476 "io_timeout": 0, 00:19:55.476 "avg_latency_us": 17722.15001693958, 00:19:55.476 "min_latency_us": 301.61454545454546, 00:19:55.476 
"max_latency_us": 114866.73454545454 00:19:55.476 } 00:19:55.476 ], 00:19:55.476 "core_count": 1 00:19:55.476 } 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:55.476 04:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:55.734 /dev/nbd0 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:55.734 1+0 records in 00:19:55.734 1+0 records out 00:19:55.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397628 s, 10.3 MB/s 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:55.734 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:55.735 04:41:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:55.993 /dev/nbd1 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.251 1+0 records in 00:19:56.251 1+0 records out 00:19:56.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427873 s, 9.6 MB/s 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.251 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.252 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:56.252 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:56.252 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:56.252 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:56.252 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:56.252 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:56.252 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:56.252 04:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:56.510 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:56.510 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:56.510 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:56.510 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:56.510 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:56.510 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.768 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:57.026 /dev/nbd1 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # local i 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.026 1+0 records in 00:19:57.026 1+0 records out 00:19:57.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291125 s, 14.1 MB/s 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:57.026 04:41:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.026 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:57.284 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.542 04:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.800 [2024-11-27 04:41:45.191990] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:57.800 [2024-11-27 04:41:45.192061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.800 [2024-11-27 04:41:45.192090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:57.800 [2024-11-27 04:41:45.192108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.800 [2024-11-27 04:41:45.195115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.800 [2024-11-27 04:41:45.195167] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:57.800 [2024-11-27 04:41:45.195280] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:57.800 [2024-11-27 04:41:45.195350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.800 [2024-11-27 04:41:45.195525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:57.800 [2024-11-27 04:41:45.195677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:57.800 spare 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.800 [2024-11-27 04:41:45.295840] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007b00 00:19:57.800 [2024-11-27 04:41:45.295905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:57.800 [2024-11-27 04:41:45.296351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:19:57.800 [2024-11-27 04:41:45.296655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:57.800 [2024-11-27 04:41:45.296684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:57.800 [2024-11-27 04:41:45.297192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.800 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.801 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.801 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.801 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.801 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.801 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.801 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.801 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.801 "name": "raid_bdev1", 00:19:57.801 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:57.801 "strip_size_kb": 0, 00:19:57.801 "state": "online", 00:19:57.801 "raid_level": "raid1", 00:19:57.801 "superblock": true, 00:19:57.801 "num_base_bdevs": 4, 00:19:57.801 "num_base_bdevs_discovered": 3, 00:19:57.801 "num_base_bdevs_operational": 3, 00:19:57.801 "base_bdevs_list": [ 00:19:57.801 { 00:19:57.801 "name": "spare", 00:19:57.801 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:57.801 "is_configured": true, 00:19:57.801 "data_offset": 2048, 00:19:57.801 "data_size": 63488 00:19:57.801 }, 00:19:57.801 { 00:19:57.801 "name": null, 00:19:57.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.801 "is_configured": false, 00:19:57.801 "data_offset": 2048, 00:19:57.801 "data_size": 63488 00:19:57.801 }, 00:19:57.801 { 00:19:57.801 "name": "BaseBdev3", 00:19:57.801 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:57.801 "is_configured": true, 00:19:57.801 "data_offset": 2048, 00:19:57.801 "data_size": 63488 00:19:57.801 }, 00:19:57.801 { 00:19:57.801 "name": "BaseBdev4", 00:19:57.801 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:57.801 "is_configured": true, 00:19:57.801 "data_offset": 2048, 00:19:57.801 "data_size": 63488 00:19:57.801 } 00:19:57.801 ] 00:19:57.801 }' 00:19:57.801 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.801 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.366 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.366 "name": "raid_bdev1", 00:19:58.366 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:58.366 "strip_size_kb": 0, 00:19:58.366 "state": "online", 00:19:58.366 "raid_level": "raid1", 00:19:58.366 "superblock": true, 00:19:58.366 "num_base_bdevs": 4, 00:19:58.366 "num_base_bdevs_discovered": 3, 00:19:58.366 "num_base_bdevs_operational": 3, 00:19:58.366 "base_bdevs_list": [ 00:19:58.366 { 00:19:58.366 "name": "spare", 00:19:58.366 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:19:58.366 "is_configured": true, 00:19:58.367 "data_offset": 2048, 00:19:58.367 "data_size": 63488 00:19:58.367 }, 
00:19:58.367 { 00:19:58.367 "name": null, 00:19:58.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.367 "is_configured": false, 00:19:58.367 "data_offset": 2048, 00:19:58.367 "data_size": 63488 00:19:58.367 }, 00:19:58.367 { 00:19:58.367 "name": "BaseBdev3", 00:19:58.367 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:58.367 "is_configured": true, 00:19:58.367 "data_offset": 2048, 00:19:58.367 "data_size": 63488 00:19:58.367 }, 00:19:58.367 { 00:19:58.367 "name": "BaseBdev4", 00:19:58.367 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:58.367 "is_configured": true, 00:19:58.367 "data_offset": 2048, 00:19:58.367 "data_size": 63488 00:19:58.367 } 00:19:58.367 ] 00:19:58.367 }' 00:19:58.367 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.367 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.367 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.367 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.367 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.367 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:58.367 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.367 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.625 04:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:58.625 04:41:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.625 [2024-11-27 04:41:46.029454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.625 
04:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.625 "name": "raid_bdev1", 00:19:58.625 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:19:58.625 "strip_size_kb": 0, 00:19:58.625 "state": "online", 00:19:58.625 "raid_level": "raid1", 00:19:58.625 "superblock": true, 00:19:58.625 "num_base_bdevs": 4, 00:19:58.625 "num_base_bdevs_discovered": 2, 00:19:58.625 "num_base_bdevs_operational": 2, 00:19:58.625 "base_bdevs_list": [ 00:19:58.625 { 00:19:58.625 "name": null, 00:19:58.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.625 "is_configured": false, 00:19:58.625 "data_offset": 0, 00:19:58.625 "data_size": 63488 00:19:58.625 }, 00:19:58.625 { 00:19:58.625 "name": null, 00:19:58.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.625 "is_configured": false, 00:19:58.625 "data_offset": 2048, 00:19:58.625 "data_size": 63488 00:19:58.625 }, 00:19:58.625 { 00:19:58.625 "name": "BaseBdev3", 00:19:58.625 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:19:58.625 "is_configured": true, 00:19:58.625 "data_offset": 2048, 00:19:58.625 "data_size": 63488 00:19:58.625 }, 00:19:58.625 { 00:19:58.625 "name": "BaseBdev4", 00:19:58.625 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:19:58.625 "is_configured": true, 00:19:58.625 "data_offset": 2048, 00:19:58.625 "data_size": 63488 00:19:58.625 } 00:19:58.625 ] 00:19:58.625 }' 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.625 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:59.191 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:59.191 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.191 04:41:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:59.191 [2024-11-27 04:41:46.581736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:59.191 [2024-11-27 04:41:46.582190] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:59.191 [2024-11-27 04:41:46.582236] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:59.191 [2024-11-27 04:41:46.582312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:59.191 [2024-11-27 04:41:46.596621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:19:59.191 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.191 04:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:59.191 [2024-11-27 04:41:46.599541] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.211 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.211 "name": "raid_bdev1", 00:20:00.211 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:20:00.211 "strip_size_kb": 0, 00:20:00.211 "state": "online", 00:20:00.211 "raid_level": "raid1", 00:20:00.211 "superblock": true, 00:20:00.211 "num_base_bdevs": 4, 00:20:00.211 "num_base_bdevs_discovered": 3, 00:20:00.211 "num_base_bdevs_operational": 3, 00:20:00.211 "process": { 00:20:00.211 "type": "rebuild", 00:20:00.211 "target": "spare", 00:20:00.211 "progress": { 00:20:00.211 "blocks": 18432, 00:20:00.211 "percent": 29 00:20:00.211 } 00:20:00.211 }, 00:20:00.211 "base_bdevs_list": [ 00:20:00.211 { 00:20:00.211 "name": "spare", 00:20:00.211 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:20:00.211 "is_configured": true, 00:20:00.211 "data_offset": 2048, 00:20:00.211 "data_size": 63488 00:20:00.211 }, 00:20:00.211 { 00:20:00.211 "name": null, 00:20:00.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.211 "is_configured": false, 00:20:00.211 "data_offset": 2048, 00:20:00.211 "data_size": 63488 00:20:00.211 }, 00:20:00.211 { 00:20:00.211 "name": "BaseBdev3", 00:20:00.211 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:20:00.211 "is_configured": true, 00:20:00.211 "data_offset": 2048, 00:20:00.211 "data_size": 63488 00:20:00.212 }, 00:20:00.212 { 00:20:00.212 "name": "BaseBdev4", 00:20:00.212 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:20:00.212 "is_configured": true, 00:20:00.212 "data_offset": 2048, 00:20:00.212 "data_size": 63488 00:20:00.212 } 00:20:00.212 ] 00:20:00.212 }' 00:20:00.212 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:20:00.212 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.212 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.212 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.212 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:00.212 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.212 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.212 [2024-11-27 04:41:47.758231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.212 [2024-11-27 04:41:47.812350] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:00.212 [2024-11-27 04:41:47.812522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.212 [2024-11-27 04:41:47.812555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.212 [2024-11-27 04:41:47.812579] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.469 "name": "raid_bdev1", 00:20:00.469 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:20:00.469 "strip_size_kb": 0, 00:20:00.469 "state": "online", 00:20:00.469 "raid_level": "raid1", 00:20:00.469 "superblock": true, 00:20:00.469 "num_base_bdevs": 4, 00:20:00.469 "num_base_bdevs_discovered": 2, 00:20:00.469 "num_base_bdevs_operational": 2, 00:20:00.469 "base_bdevs_list": [ 00:20:00.469 { 00:20:00.469 "name": null, 00:20:00.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.469 "is_configured": false, 00:20:00.469 "data_offset": 0, 00:20:00.469 "data_size": 63488 00:20:00.469 }, 00:20:00.469 { 00:20:00.469 "name": null, 00:20:00.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.469 "is_configured": false, 00:20:00.469 
"data_offset": 2048, 00:20:00.469 "data_size": 63488 00:20:00.469 }, 00:20:00.469 { 00:20:00.469 "name": "BaseBdev3", 00:20:00.469 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:20:00.469 "is_configured": true, 00:20:00.469 "data_offset": 2048, 00:20:00.469 "data_size": 63488 00:20:00.469 }, 00:20:00.469 { 00:20:00.469 "name": "BaseBdev4", 00:20:00.469 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:20:00.469 "is_configured": true, 00:20:00.469 "data_offset": 2048, 00:20:00.469 "data_size": 63488 00:20:00.469 } 00:20:00.469 ] 00:20:00.469 }' 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.469 04:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.727 04:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:00.727 04:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.727 04:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.985 [2024-11-27 04:41:48.349901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:00.985 [2024-11-27 04:41:48.350042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.985 [2024-11-27 04:41:48.350108] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:00.985 [2024-11-27 04:41:48.350137] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.985 [2024-11-27 04:41:48.350918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.985 [2024-11-27 04:41:48.350974] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:00.985 [2024-11-27 04:41:48.351127] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:00.985 [2024-11-27 
04:41:48.351160] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:00.985 [2024-11-27 04:41:48.351177] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:00.985 [2024-11-27 04:41:48.351221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.985 [2024-11-27 04:41:48.366033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:20:00.985 spare 00:20:00.985 04:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.985 04:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:00.985 [2024-11-27 04:41:48.368759] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.916 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.916 "name": "raid_bdev1", 00:20:01.916 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:20:01.916 "strip_size_kb": 0, 00:20:01.916 "state": "online", 00:20:01.916 "raid_level": "raid1", 00:20:01.916 "superblock": true, 00:20:01.916 "num_base_bdevs": 4, 00:20:01.916 "num_base_bdevs_discovered": 3, 00:20:01.916 "num_base_bdevs_operational": 3, 00:20:01.916 "process": { 00:20:01.916 "type": "rebuild", 00:20:01.916 "target": "spare", 00:20:01.916 "progress": { 00:20:01.916 "blocks": 18432, 00:20:01.916 "percent": 29 00:20:01.916 } 00:20:01.916 }, 00:20:01.916 "base_bdevs_list": [ 00:20:01.916 { 00:20:01.916 "name": "spare", 00:20:01.916 "uuid": "71a65e8f-1e4f-51d5-8a5a-a6188770d071", 00:20:01.916 "is_configured": true, 00:20:01.916 "data_offset": 2048, 00:20:01.916 "data_size": 63488 00:20:01.916 }, 00:20:01.916 { 00:20:01.916 "name": null, 00:20:01.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.916 "is_configured": false, 00:20:01.917 "data_offset": 2048, 00:20:01.917 "data_size": 63488 00:20:01.917 }, 00:20:01.917 { 00:20:01.917 "name": "BaseBdev3", 00:20:01.917 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:20:01.917 "is_configured": true, 00:20:01.917 "data_offset": 2048, 00:20:01.917 "data_size": 63488 00:20:01.917 }, 00:20:01.917 { 00:20:01.917 "name": "BaseBdev4", 00:20:01.917 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:20:01.917 "is_configured": true, 00:20:01.917 "data_offset": 2048, 00:20:01.917 "data_size": 63488 00:20:01.917 } 00:20:01.917 ] 00:20:01.917 }' 00:20:01.917 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.917 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.917 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:20:01.917 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.917 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:01.917 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.917 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.917 [2024-11-27 04:41:49.535427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.174 [2024-11-27 04:41:49.581742] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:02.174 [2024-11-27 04:41:49.581908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.174 [2024-11-27 04:41:49.581950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.174 [2024-11-27 04:41:49.581974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.174 "name": "raid_bdev1", 00:20:02.174 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:20:02.174 "strip_size_kb": 0, 00:20:02.174 "state": "online", 00:20:02.174 "raid_level": "raid1", 00:20:02.174 "superblock": true, 00:20:02.174 "num_base_bdevs": 4, 00:20:02.174 "num_base_bdevs_discovered": 2, 00:20:02.174 "num_base_bdevs_operational": 2, 00:20:02.174 "base_bdevs_list": [ 00:20:02.174 { 00:20:02.174 "name": null, 00:20:02.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.174 "is_configured": false, 00:20:02.174 "data_offset": 0, 00:20:02.174 "data_size": 63488 00:20:02.174 }, 00:20:02.174 { 00:20:02.174 "name": null, 00:20:02.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.174 "is_configured": false, 00:20:02.174 "data_offset": 2048, 00:20:02.174 "data_size": 63488 00:20:02.174 }, 00:20:02.174 { 00:20:02.174 "name": "BaseBdev3", 00:20:02.174 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:20:02.174 "is_configured": true, 
00:20:02.174 "data_offset": 2048, 00:20:02.174 "data_size": 63488 00:20:02.174 }, 00:20:02.174 { 00:20:02.174 "name": "BaseBdev4", 00:20:02.174 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:20:02.174 "is_configured": true, 00:20:02.174 "data_offset": 2048, 00:20:02.174 "data_size": 63488 00:20:02.174 } 00:20:02.174 ] 00:20:02.174 }' 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.174 04:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.738 "name": "raid_bdev1", 00:20:02.738 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:20:02.738 "strip_size_kb": 0, 00:20:02.738 "state": "online", 00:20:02.738 "raid_level": "raid1", 00:20:02.738 
"superblock": true, 00:20:02.738 "num_base_bdevs": 4, 00:20:02.738 "num_base_bdevs_discovered": 2, 00:20:02.738 "num_base_bdevs_operational": 2, 00:20:02.738 "base_bdevs_list": [ 00:20:02.738 { 00:20:02.738 "name": null, 00:20:02.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.738 "is_configured": false, 00:20:02.738 "data_offset": 0, 00:20:02.738 "data_size": 63488 00:20:02.738 }, 00:20:02.738 { 00:20:02.738 "name": null, 00:20:02.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.738 "is_configured": false, 00:20:02.738 "data_offset": 2048, 00:20:02.738 "data_size": 63488 00:20:02.738 }, 00:20:02.738 { 00:20:02.738 "name": "BaseBdev3", 00:20:02.738 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:20:02.738 "is_configured": true, 00:20:02.738 "data_offset": 2048, 00:20:02.738 "data_size": 63488 00:20:02.738 }, 00:20:02.738 { 00:20:02.738 "name": "BaseBdev4", 00:20:02.738 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:20:02.738 "is_configured": true, 00:20:02.738 "data_offset": 2048, 00:20:02.738 "data_size": 63488 00:20:02.738 } 00:20:02.738 ] 00:20:02.738 }' 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.738 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 [2024-11-27 04:41:50.303763] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:02.738 [2024-11-27 04:41:50.303909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.738 [2024-11-27 04:41:50.303951] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:20:02.738 [2024-11-27 04:41:50.303970] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.739 [2024-11-27 04:41:50.304728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.739 [2024-11-27 04:41:50.304813] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:02.739 [2024-11-27 04:41:50.304968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:02.739 [2024-11-27 04:41:50.304995] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:02.739 [2024-11-27 04:41:50.305018] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:02.739 [2024-11-27 04:41:50.305050] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:02.739 BaseBdev1 00:20:02.739 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.739 04:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:03.706 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:03.706 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.706 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.706 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.707 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.707 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.707 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.707 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.707 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.707 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.707 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.707 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.707 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.707 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:03.965 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.965 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.965 "name": "raid_bdev1", 00:20:03.965 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:20:03.965 "strip_size_kb": 0, 00:20:03.965 "state": "online", 00:20:03.965 "raid_level": "raid1", 00:20:03.965 "superblock": true, 00:20:03.965 
"num_base_bdevs": 4, 00:20:03.965 "num_base_bdevs_discovered": 2, 00:20:03.965 "num_base_bdevs_operational": 2, 00:20:03.965 "base_bdevs_list": [ 00:20:03.965 { 00:20:03.965 "name": null, 00:20:03.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.965 "is_configured": false, 00:20:03.965 "data_offset": 0, 00:20:03.965 "data_size": 63488 00:20:03.965 }, 00:20:03.965 { 00:20:03.965 "name": null, 00:20:03.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.965 "is_configured": false, 00:20:03.965 "data_offset": 2048, 00:20:03.965 "data_size": 63488 00:20:03.965 }, 00:20:03.965 { 00:20:03.965 "name": "BaseBdev3", 00:20:03.965 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:20:03.965 "is_configured": true, 00:20:03.965 "data_offset": 2048, 00:20:03.965 "data_size": 63488 00:20:03.965 }, 00:20:03.965 { 00:20:03.965 "name": "BaseBdev4", 00:20:03.965 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:20:03.965 "is_configured": true, 00:20:03.965 "data_offset": 2048, 00:20:03.965 "data_size": 63488 00:20:03.965 } 00:20:03.965 ] 00:20:03.965 }' 00:20:03.965 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.965 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.224 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:04.224 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.224 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:04.224 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:04.224 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.224 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.224 04:41:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.224 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.224 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.483 "name": "raid_bdev1", 00:20:04.483 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:20:04.483 "strip_size_kb": 0, 00:20:04.483 "state": "online", 00:20:04.483 "raid_level": "raid1", 00:20:04.483 "superblock": true, 00:20:04.483 "num_base_bdevs": 4, 00:20:04.483 "num_base_bdevs_discovered": 2, 00:20:04.483 "num_base_bdevs_operational": 2, 00:20:04.483 "base_bdevs_list": [ 00:20:04.483 { 00:20:04.483 "name": null, 00:20:04.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.483 "is_configured": false, 00:20:04.483 "data_offset": 0, 00:20:04.483 "data_size": 63488 00:20:04.483 }, 00:20:04.483 { 00:20:04.483 "name": null, 00:20:04.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.483 "is_configured": false, 00:20:04.483 "data_offset": 2048, 00:20:04.483 "data_size": 63488 00:20:04.483 }, 00:20:04.483 { 00:20:04.483 "name": "BaseBdev3", 00:20:04.483 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:20:04.483 "is_configured": true, 00:20:04.483 "data_offset": 2048, 00:20:04.483 "data_size": 63488 00:20:04.483 }, 00:20:04.483 { 00:20:04.483 "name": "BaseBdev4", 00:20:04.483 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:20:04.483 "is_configured": true, 00:20:04.483 "data_offset": 2048, 00:20:04.483 "data_size": 63488 00:20:04.483 } 00:20:04.483 ] 00:20:04.483 }' 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.483 04:41:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.483 04:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.483 [2024-11-27 04:41:52.004813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.483 [2024-11-27 04:41:52.005196] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:04.483 [2024-11-27 04:41:52.005233] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:20:04.483 request: 00:20:04.483 { 00:20:04.483 "base_bdev": "BaseBdev1", 00:20:04.483 "raid_bdev": "raid_bdev1", 00:20:04.483 "method": "bdev_raid_add_base_bdev", 00:20:04.483 "req_id": 1 00:20:04.483 } 00:20:04.483 Got JSON-RPC error response 00:20:04.483 response: 00:20:04.483 { 00:20:04.483 "code": -22, 00:20:04.483 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:04.483 } 00:20:04.483 04:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:04.483 04:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:20:04.483 04:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:04.483 04:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:04.483 04:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:04.483 04:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.418 04:41:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.418 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.678 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.678 "name": "raid_bdev1", 00:20:05.678 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:20:05.678 "strip_size_kb": 0, 00:20:05.678 "state": "online", 00:20:05.678 "raid_level": "raid1", 00:20:05.678 "superblock": true, 00:20:05.678 "num_base_bdevs": 4, 00:20:05.678 "num_base_bdevs_discovered": 2, 00:20:05.678 "num_base_bdevs_operational": 2, 00:20:05.678 "base_bdevs_list": [ 00:20:05.678 { 00:20:05.678 "name": null, 00:20:05.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.678 "is_configured": false, 00:20:05.678 "data_offset": 0, 00:20:05.678 "data_size": 63488 00:20:05.678 }, 00:20:05.678 { 00:20:05.678 "name": null, 00:20:05.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.678 "is_configured": false, 00:20:05.678 "data_offset": 2048, 00:20:05.678 "data_size": 63488 00:20:05.678 }, 00:20:05.678 { 00:20:05.678 "name": "BaseBdev3", 00:20:05.678 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:20:05.678 "is_configured": true, 00:20:05.678 "data_offset": 2048, 00:20:05.678 "data_size": 63488 00:20:05.678 }, 00:20:05.678 { 00:20:05.678 "name": "BaseBdev4", 00:20:05.678 "uuid": 
"94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:20:05.678 "is_configured": true, 00:20:05.678 "data_offset": 2048, 00:20:05.678 "data_size": 63488 00:20:05.678 } 00:20:05.678 ] 00:20:05.678 }' 00:20:05.678 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.678 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.936 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:05.936 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:05.936 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:05.936 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:05.936 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:05.936 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.936 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.936 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.936 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.936 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.195 "name": "raid_bdev1", 00:20:06.195 "uuid": "0391649c-d5e3-4209-b774-c151d610f839", 00:20:06.195 "strip_size_kb": 0, 00:20:06.195 "state": "online", 00:20:06.195 "raid_level": "raid1", 00:20:06.195 "superblock": true, 00:20:06.195 "num_base_bdevs": 4, 00:20:06.195 "num_base_bdevs_discovered": 2, 00:20:06.195 "num_base_bdevs_operational": 2, 00:20:06.195 
"base_bdevs_list": [ 00:20:06.195 { 00:20:06.195 "name": null, 00:20:06.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.195 "is_configured": false, 00:20:06.195 "data_offset": 0, 00:20:06.195 "data_size": 63488 00:20:06.195 }, 00:20:06.195 { 00:20:06.195 "name": null, 00:20:06.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.195 "is_configured": false, 00:20:06.195 "data_offset": 2048, 00:20:06.195 "data_size": 63488 00:20:06.195 }, 00:20:06.195 { 00:20:06.195 "name": "BaseBdev3", 00:20:06.195 "uuid": "15178d7e-bc42-5471-afb2-91e8f387d048", 00:20:06.195 "is_configured": true, 00:20:06.195 "data_offset": 2048, 00:20:06.195 "data_size": 63488 00:20:06.195 }, 00:20:06.195 { 00:20:06.195 "name": "BaseBdev4", 00:20:06.195 "uuid": "94f8ff3d-043f-59fa-b060-5e600c1833b0", 00:20:06.195 "is_configured": true, 00:20:06.195 "data_offset": 2048, 00:20:06.195 "data_size": 63488 00:20:06.195 } 00:20:06.195 ] 00:20:06.195 }' 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79634 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79634 ']' 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79634 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79634 00:20:06.195 killing process with pid 79634 00:20:06.195 Received shutdown signal, test time was about 19.285386 seconds 00:20:06.195 00:20:06.195 Latency(us) 00:20:06.195 [2024-11-27T04:41:53.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.195 [2024-11-27T04:41:53.818Z] =================================================================================================================== 00:20:06.195 [2024-11-27T04:41:53.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79634' 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79634 00:20:06.195 04:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79634 00:20:06.195 [2024-11-27 04:41:53.724624] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.195 [2024-11-27 04:41:53.724971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.195 [2024-11-27 04:41:53.725091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.195 [2024-11-27 04:41:53.725129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:06.762 [2024-11-27 04:41:54.150412] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:08.137 04:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:08.137 00:20:08.137 real 0m23.210s 00:20:08.137 user 0m31.454s 00:20:08.137 sys 0m2.417s 00:20:08.137 04:41:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.137 ************************************ 00:20:08.137 END TEST raid_rebuild_test_sb_io 00:20:08.137 ************************************ 00:20:08.137 04:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:08.137 04:41:55 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:20:08.137 04:41:55 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:20:08.137 04:41:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:08.137 04:41:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.137 04:41:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:08.137 ************************************ 00:20:08.137 START TEST raid5f_state_function_test 00:20:08.137 ************************************ 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:08.137 04:41:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:08.137 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:20:08.138 Process raid pid: 80373 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80373 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80373' 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80373 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80373 ']' 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.138 04:41:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.138 [2024-11-27 04:41:55.586392] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:20:08.138 [2024-11-27 04:41:55.586589] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.396 [2024-11-27 04:41:55.779102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.397 [2024-11-27 04:41:55.915946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.656 [2024-11-27 04:41:56.130768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.656 [2024-11-27 04:41:56.130837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.223 04:41:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.223 04:41:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:09.223 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:09.223 04:41:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.223 04:41:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.223 [2024-11-27 04:41:56.660701] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:09.223 [2024-11-27 04:41:56.661423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:09.224 [2024-11-27 04:41:56.661456] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.224 [2024-11-27 04:41:56.661483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.224 [2024-11-27 04:41:56.661501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:20:09.224 [2024-11-27 04:41:56.661516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.224 "name": "Existed_Raid", 00:20:09.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.224 "strip_size_kb": 64, 00:20:09.224 "state": "configuring", 00:20:09.224 "raid_level": "raid5f", 00:20:09.224 "superblock": false, 00:20:09.224 "num_base_bdevs": 3, 00:20:09.224 "num_base_bdevs_discovered": 0, 00:20:09.224 "num_base_bdevs_operational": 3, 00:20:09.224 "base_bdevs_list": [ 00:20:09.224 { 00:20:09.224 "name": "BaseBdev1", 00:20:09.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.224 "is_configured": false, 00:20:09.224 "data_offset": 0, 00:20:09.224 "data_size": 0 00:20:09.224 }, 00:20:09.224 { 00:20:09.224 "name": "BaseBdev2", 00:20:09.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.224 "is_configured": false, 00:20:09.224 "data_offset": 0, 00:20:09.224 "data_size": 0 00:20:09.224 }, 00:20:09.224 { 00:20:09.224 "name": "BaseBdev3", 00:20:09.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.224 "is_configured": false, 00:20:09.224 "data_offset": 0, 00:20:09.224 "data_size": 0 00:20:09.224 } 00:20:09.224 ] 00:20:09.224 }' 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.224 04:41:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.790 [2024-11-27 04:41:57.188795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:09.790 [2024-11-27 04:41:57.188995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.790 [2024-11-27 04:41:57.196754] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:09.790 [2024-11-27 04:41:57.196837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:09.790 [2024-11-27 04:41:57.196853] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.790 [2024-11-27 04:41:57.196870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.790 [2024-11-27 04:41:57.196880] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:09.790 [2024-11-27 04:41:57.196893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.790 [2024-11-27 04:41:57.242493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:09.790 BaseBdev1 00:20:09.790 04:41:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:09.790 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.791 [ 00:20:09.791 { 00:20:09.791 "name": "BaseBdev1", 00:20:09.791 "aliases": [ 00:20:09.791 "90bd518e-d8cf-4bfb-9438-7ae752767ccc" 00:20:09.791 ], 00:20:09.791 "product_name": "Malloc disk", 00:20:09.791 "block_size": 512, 00:20:09.791 "num_blocks": 65536, 00:20:09.791 "uuid": "90bd518e-d8cf-4bfb-9438-7ae752767ccc", 00:20:09.791 "assigned_rate_limits": { 00:20:09.791 "rw_ios_per_sec": 0, 00:20:09.791 
"rw_mbytes_per_sec": 0, 00:20:09.791 "r_mbytes_per_sec": 0, 00:20:09.791 "w_mbytes_per_sec": 0 00:20:09.791 }, 00:20:09.791 "claimed": true, 00:20:09.791 "claim_type": "exclusive_write", 00:20:09.791 "zoned": false, 00:20:09.791 "supported_io_types": { 00:20:09.791 "read": true, 00:20:09.791 "write": true, 00:20:09.791 "unmap": true, 00:20:09.791 "flush": true, 00:20:09.791 "reset": true, 00:20:09.791 "nvme_admin": false, 00:20:09.791 "nvme_io": false, 00:20:09.791 "nvme_io_md": false, 00:20:09.791 "write_zeroes": true, 00:20:09.791 "zcopy": true, 00:20:09.791 "get_zone_info": false, 00:20:09.791 "zone_management": false, 00:20:09.791 "zone_append": false, 00:20:09.791 "compare": false, 00:20:09.791 "compare_and_write": false, 00:20:09.791 "abort": true, 00:20:09.791 "seek_hole": false, 00:20:09.791 "seek_data": false, 00:20:09.791 "copy": true, 00:20:09.791 "nvme_iov_md": false 00:20:09.791 }, 00:20:09.791 "memory_domains": [ 00:20:09.791 { 00:20:09.791 "dma_device_id": "system", 00:20:09.791 "dma_device_type": 1 00:20:09.791 }, 00:20:09.791 { 00:20:09.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.791 "dma_device_type": 2 00:20:09.791 } 00:20:09.791 ], 00:20:09.791 "driver_specific": {} 00:20:09.791 } 00:20:09.791 ] 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:09.791 04:41:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.791 "name": "Existed_Raid", 00:20:09.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.791 "strip_size_kb": 64, 00:20:09.791 "state": "configuring", 00:20:09.791 "raid_level": "raid5f", 00:20:09.791 "superblock": false, 00:20:09.791 "num_base_bdevs": 3, 00:20:09.791 "num_base_bdevs_discovered": 1, 00:20:09.791 "num_base_bdevs_operational": 3, 00:20:09.791 "base_bdevs_list": [ 00:20:09.791 { 00:20:09.791 "name": "BaseBdev1", 00:20:09.791 "uuid": "90bd518e-d8cf-4bfb-9438-7ae752767ccc", 00:20:09.791 "is_configured": true, 00:20:09.791 "data_offset": 0, 00:20:09.791 "data_size": 65536 00:20:09.791 }, 00:20:09.791 { 00:20:09.791 "name": 
"BaseBdev2", 00:20:09.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.791 "is_configured": false, 00:20:09.791 "data_offset": 0, 00:20:09.791 "data_size": 0 00:20:09.791 }, 00:20:09.791 { 00:20:09.791 "name": "BaseBdev3", 00:20:09.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.791 "is_configured": false, 00:20:09.791 "data_offset": 0, 00:20:09.791 "data_size": 0 00:20:09.791 } 00:20:09.791 ] 00:20:09.791 }' 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.791 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.357 [2024-11-27 04:41:57.818925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:10.357 [2024-11-27 04:41:57.819049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.357 [2024-11-27 04:41:57.826963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.357 [2024-11-27 04:41:57.829915] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:20:10.357 [2024-11-27 04:41:57.830006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.357 [2024-11-27 04:41:57.830028] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.357 [2024-11-27 04:41:57.830047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.357 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.357 "name": "Existed_Raid", 00:20:10.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.357 "strip_size_kb": 64, 00:20:10.357 "state": "configuring", 00:20:10.357 "raid_level": "raid5f", 00:20:10.357 "superblock": false, 00:20:10.357 "num_base_bdevs": 3, 00:20:10.357 "num_base_bdevs_discovered": 1, 00:20:10.357 "num_base_bdevs_operational": 3, 00:20:10.357 "base_bdevs_list": [ 00:20:10.357 { 00:20:10.357 "name": "BaseBdev1", 00:20:10.357 "uuid": "90bd518e-d8cf-4bfb-9438-7ae752767ccc", 00:20:10.357 "is_configured": true, 00:20:10.357 "data_offset": 0, 00:20:10.357 "data_size": 65536 00:20:10.357 }, 00:20:10.357 { 00:20:10.357 "name": "BaseBdev2", 00:20:10.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.357 "is_configured": false, 00:20:10.357 "data_offset": 0, 00:20:10.357 "data_size": 0 00:20:10.357 }, 00:20:10.357 { 00:20:10.357 "name": "BaseBdev3", 00:20:10.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.358 "is_configured": false, 00:20:10.358 "data_offset": 0, 00:20:10.358 "data_size": 0 00:20:10.358 } 00:20:10.358 ] 00:20:10.358 }' 00:20:10.358 04:41:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.358 04:41:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.925 [2024-11-27 04:41:58.412148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:10.925 BaseBdev2 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.925 04:41:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.925 [ 00:20:10.925 { 00:20:10.926 "name": "BaseBdev2", 00:20:10.926 "aliases": [ 00:20:10.926 "096881f5-a8e9-4124-b89f-c7f73647bd4b" 00:20:10.926 ], 00:20:10.926 "product_name": "Malloc disk", 00:20:10.926 "block_size": 512, 00:20:10.926 "num_blocks": 65536, 00:20:10.926 "uuid": "096881f5-a8e9-4124-b89f-c7f73647bd4b", 00:20:10.926 "assigned_rate_limits": { 00:20:10.926 "rw_ios_per_sec": 0, 00:20:10.926 "rw_mbytes_per_sec": 0, 00:20:10.926 "r_mbytes_per_sec": 0, 00:20:10.926 "w_mbytes_per_sec": 0 00:20:10.926 }, 00:20:10.926 "claimed": true, 00:20:10.926 "claim_type": "exclusive_write", 00:20:10.926 "zoned": false, 00:20:10.926 "supported_io_types": { 00:20:10.926 "read": true, 00:20:10.926 "write": true, 00:20:10.926 "unmap": true, 00:20:10.926 "flush": true, 00:20:10.926 "reset": true, 00:20:10.926 "nvme_admin": false, 00:20:10.926 "nvme_io": false, 00:20:10.926 "nvme_io_md": false, 00:20:10.926 "write_zeroes": true, 00:20:10.926 "zcopy": true, 00:20:10.926 "get_zone_info": false, 00:20:10.926 "zone_management": false, 00:20:10.926 "zone_append": false, 00:20:10.926 "compare": false, 00:20:10.926 "compare_and_write": false, 00:20:10.926 "abort": true, 00:20:10.926 "seek_hole": false, 00:20:10.926 "seek_data": false, 00:20:10.926 "copy": true, 00:20:10.926 "nvme_iov_md": false 00:20:10.926 }, 00:20:10.926 "memory_domains": [ 00:20:10.926 { 00:20:10.926 "dma_device_id": "system", 00:20:10.926 "dma_device_type": 1 00:20:10.926 }, 00:20:10.926 { 00:20:10.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.926 "dma_device_type": 2 00:20:10.926 } 00:20:10.926 ], 00:20:10.926 "driver_specific": {} 00:20:10.926 } 00:20:10.926 ] 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:20:10.926 "name": "Existed_Raid", 00:20:10.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.926 "strip_size_kb": 64, 00:20:10.926 "state": "configuring", 00:20:10.926 "raid_level": "raid5f", 00:20:10.926 "superblock": false, 00:20:10.926 "num_base_bdevs": 3, 00:20:10.926 "num_base_bdevs_discovered": 2, 00:20:10.926 "num_base_bdevs_operational": 3, 00:20:10.926 "base_bdevs_list": [ 00:20:10.926 { 00:20:10.926 "name": "BaseBdev1", 00:20:10.926 "uuid": "90bd518e-d8cf-4bfb-9438-7ae752767ccc", 00:20:10.926 "is_configured": true, 00:20:10.926 "data_offset": 0, 00:20:10.926 "data_size": 65536 00:20:10.926 }, 00:20:10.926 { 00:20:10.926 "name": "BaseBdev2", 00:20:10.926 "uuid": "096881f5-a8e9-4124-b89f-c7f73647bd4b", 00:20:10.926 "is_configured": true, 00:20:10.926 "data_offset": 0, 00:20:10.926 "data_size": 65536 00:20:10.926 }, 00:20:10.926 { 00:20:10.926 "name": "BaseBdev3", 00:20:10.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.926 "is_configured": false, 00:20:10.926 "data_offset": 0, 00:20:10.926 "data_size": 0 00:20:10.926 } 00:20:10.926 ] 00:20:10.926 }' 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.926 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.559 04:41:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:11.559 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.559 04:41:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.559 [2024-11-27 04:41:59.012113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.559 [2024-11-27 04:41:59.012255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:11.559 [2024-11-27 04:41:59.012284] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:11.559 [2024-11-27 04:41:59.012665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:11.559 [2024-11-27 04:41:59.018367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:11.559 [2024-11-27 04:41:59.018422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:11.559 [2024-11-27 04:41:59.018948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.559 BaseBdev3 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.559 [ 00:20:11.559 { 00:20:11.559 "name": "BaseBdev3", 00:20:11.559 "aliases": [ 00:20:11.559 "53beed33-ff41-428e-aa90-e9dcc3a33aef" 00:20:11.559 ], 00:20:11.559 "product_name": "Malloc disk", 00:20:11.559 "block_size": 512, 00:20:11.559 "num_blocks": 65536, 00:20:11.559 "uuid": "53beed33-ff41-428e-aa90-e9dcc3a33aef", 00:20:11.559 "assigned_rate_limits": { 00:20:11.559 "rw_ios_per_sec": 0, 00:20:11.559 "rw_mbytes_per_sec": 0, 00:20:11.559 "r_mbytes_per_sec": 0, 00:20:11.559 "w_mbytes_per_sec": 0 00:20:11.559 }, 00:20:11.559 "claimed": true, 00:20:11.559 "claim_type": "exclusive_write", 00:20:11.559 "zoned": false, 00:20:11.559 "supported_io_types": { 00:20:11.559 "read": true, 00:20:11.559 "write": true, 00:20:11.559 "unmap": true, 00:20:11.559 "flush": true, 00:20:11.559 "reset": true, 00:20:11.559 "nvme_admin": false, 00:20:11.559 "nvme_io": false, 00:20:11.559 "nvme_io_md": false, 00:20:11.559 "write_zeroes": true, 00:20:11.559 "zcopy": true, 00:20:11.559 "get_zone_info": false, 00:20:11.559 "zone_management": false, 00:20:11.559 "zone_append": false, 00:20:11.559 "compare": false, 00:20:11.559 "compare_and_write": false, 00:20:11.559 "abort": true, 00:20:11.559 "seek_hole": false, 00:20:11.559 "seek_data": false, 00:20:11.559 "copy": true, 00:20:11.559 "nvme_iov_md": false 00:20:11.559 }, 00:20:11.559 "memory_domains": [ 00:20:11.559 { 00:20:11.559 "dma_device_id": "system", 00:20:11.559 "dma_device_type": 1 00:20:11.559 }, 00:20:11.559 { 00:20:11.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.559 "dma_device_type": 2 00:20:11.559 } 00:20:11.559 ], 00:20:11.559 "driver_specific": {} 00:20:11.559 } 00:20:11.559 ] 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.559 04:41:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.559 "name": "Existed_Raid", 00:20:11.559 "uuid": "d460334b-dd42-40c3-b9ac-1b754304770e", 00:20:11.559 "strip_size_kb": 64, 00:20:11.559 "state": "online", 00:20:11.559 "raid_level": "raid5f", 00:20:11.559 "superblock": false, 00:20:11.559 "num_base_bdevs": 3, 00:20:11.559 "num_base_bdevs_discovered": 3, 00:20:11.559 "num_base_bdevs_operational": 3, 00:20:11.559 "base_bdevs_list": [ 00:20:11.559 { 00:20:11.559 "name": "BaseBdev1", 00:20:11.559 "uuid": "90bd518e-d8cf-4bfb-9438-7ae752767ccc", 00:20:11.559 "is_configured": true, 00:20:11.559 "data_offset": 0, 00:20:11.559 "data_size": 65536 00:20:11.559 }, 00:20:11.559 { 00:20:11.559 "name": "BaseBdev2", 00:20:11.559 "uuid": "096881f5-a8e9-4124-b89f-c7f73647bd4b", 00:20:11.559 "is_configured": true, 00:20:11.559 "data_offset": 0, 00:20:11.559 "data_size": 65536 00:20:11.559 }, 00:20:11.559 { 00:20:11.559 "name": "BaseBdev3", 00:20:11.559 "uuid": "53beed33-ff41-428e-aa90-e9dcc3a33aef", 00:20:11.559 "is_configured": true, 00:20:11.559 "data_offset": 0, 00:20:11.559 "data_size": 65536 00:20:11.559 } 00:20:11.559 ] 00:20:11.559 }' 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.559 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:12.125 04:41:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.125 [2024-11-27 04:41:59.577810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.125 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:12.125 "name": "Existed_Raid", 00:20:12.125 "aliases": [ 00:20:12.125 "d460334b-dd42-40c3-b9ac-1b754304770e" 00:20:12.125 ], 00:20:12.125 "product_name": "Raid Volume", 00:20:12.125 "block_size": 512, 00:20:12.125 "num_blocks": 131072, 00:20:12.125 "uuid": "d460334b-dd42-40c3-b9ac-1b754304770e", 00:20:12.125 "assigned_rate_limits": { 00:20:12.125 "rw_ios_per_sec": 0, 00:20:12.125 "rw_mbytes_per_sec": 0, 00:20:12.125 "r_mbytes_per_sec": 0, 00:20:12.125 "w_mbytes_per_sec": 0 00:20:12.125 }, 00:20:12.125 "claimed": false, 00:20:12.125 "zoned": false, 00:20:12.125 "supported_io_types": { 00:20:12.125 "read": true, 00:20:12.125 "write": true, 00:20:12.125 "unmap": false, 00:20:12.125 "flush": false, 00:20:12.125 "reset": true, 00:20:12.125 "nvme_admin": false, 00:20:12.125 "nvme_io": false, 00:20:12.125 "nvme_io_md": false, 00:20:12.125 "write_zeroes": true, 00:20:12.125 "zcopy": false, 00:20:12.125 "get_zone_info": false, 00:20:12.125 "zone_management": false, 00:20:12.125 "zone_append": false, 
00:20:12.125 "compare": false, 00:20:12.125 "compare_and_write": false, 00:20:12.125 "abort": false, 00:20:12.125 "seek_hole": false, 00:20:12.125 "seek_data": false, 00:20:12.125 "copy": false, 00:20:12.125 "nvme_iov_md": false 00:20:12.125 }, 00:20:12.125 "driver_specific": { 00:20:12.125 "raid": { 00:20:12.125 "uuid": "d460334b-dd42-40c3-b9ac-1b754304770e", 00:20:12.126 "strip_size_kb": 64, 00:20:12.126 "state": "online", 00:20:12.126 "raid_level": "raid5f", 00:20:12.126 "superblock": false, 00:20:12.126 "num_base_bdevs": 3, 00:20:12.126 "num_base_bdevs_discovered": 3, 00:20:12.126 "num_base_bdevs_operational": 3, 00:20:12.126 "base_bdevs_list": [ 00:20:12.126 { 00:20:12.126 "name": "BaseBdev1", 00:20:12.126 "uuid": "90bd518e-d8cf-4bfb-9438-7ae752767ccc", 00:20:12.126 "is_configured": true, 00:20:12.126 "data_offset": 0, 00:20:12.126 "data_size": 65536 00:20:12.126 }, 00:20:12.126 { 00:20:12.126 "name": "BaseBdev2", 00:20:12.126 "uuid": "096881f5-a8e9-4124-b89f-c7f73647bd4b", 00:20:12.126 "is_configured": true, 00:20:12.126 "data_offset": 0, 00:20:12.126 "data_size": 65536 00:20:12.126 }, 00:20:12.126 { 00:20:12.126 "name": "BaseBdev3", 00:20:12.126 "uuid": "53beed33-ff41-428e-aa90-e9dcc3a33aef", 00:20:12.126 "is_configured": true, 00:20:12.126 "data_offset": 0, 00:20:12.126 "data_size": 65536 00:20:12.126 } 00:20:12.126 ] 00:20:12.126 } 00:20:12.126 } 00:20:12.126 }' 00:20:12.126 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:12.126 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:12.126 BaseBdev2 00:20:12.126 BaseBdev3' 00:20:12.126 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.126 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:20:12.126 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.126 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:12.126 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.126 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.126 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.385 [2024-11-27 04:41:59.897711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:12.385 
04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.385 04:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.385 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.385 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.385 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.643 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.643 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.643 "name": "Existed_Raid", 00:20:12.643 "uuid": "d460334b-dd42-40c3-b9ac-1b754304770e", 00:20:12.643 "strip_size_kb": 64, 00:20:12.643 "state": 
"online", 00:20:12.643 "raid_level": "raid5f", 00:20:12.643 "superblock": false, 00:20:12.643 "num_base_bdevs": 3, 00:20:12.643 "num_base_bdevs_discovered": 2, 00:20:12.643 "num_base_bdevs_operational": 2, 00:20:12.643 "base_bdevs_list": [ 00:20:12.643 { 00:20:12.643 "name": null, 00:20:12.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.643 "is_configured": false, 00:20:12.643 "data_offset": 0, 00:20:12.643 "data_size": 65536 00:20:12.643 }, 00:20:12.643 { 00:20:12.643 "name": "BaseBdev2", 00:20:12.643 "uuid": "096881f5-a8e9-4124-b89f-c7f73647bd4b", 00:20:12.643 "is_configured": true, 00:20:12.644 "data_offset": 0, 00:20:12.644 "data_size": 65536 00:20:12.644 }, 00:20:12.644 { 00:20:12.644 "name": "BaseBdev3", 00:20:12.644 "uuid": "53beed33-ff41-428e-aa90-e9dcc3a33aef", 00:20:12.644 "is_configured": true, 00:20:12.644 "data_offset": 0, 00:20:12.644 "data_size": 65536 00:20:12.644 } 00:20:12.644 ] 00:20:12.644 }' 00:20:12.644 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.644 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.902 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:12.902 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:12.902 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.160 [2024-11-27 04:42:00.579296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:13.160 [2024-11-27 04:42:00.579528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.160 [2024-11-27 04:42:00.677737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.160 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.160 [2024-11-27 04:42:00.737791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:13.160 [2024-11-27 04:42:00.737980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.418 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.418 BaseBdev2 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:13.419 [ 00:20:13.419 { 00:20:13.419 "name": "BaseBdev2", 00:20:13.419 "aliases": [ 00:20:13.419 "f2a906af-ff79-4101-bd6a-42b351280232" 00:20:13.419 ], 00:20:13.419 "product_name": "Malloc disk", 00:20:13.419 "block_size": 512, 00:20:13.419 "num_blocks": 65536, 00:20:13.419 "uuid": "f2a906af-ff79-4101-bd6a-42b351280232", 00:20:13.419 "assigned_rate_limits": { 00:20:13.419 "rw_ios_per_sec": 0, 00:20:13.419 "rw_mbytes_per_sec": 0, 00:20:13.419 "r_mbytes_per_sec": 0, 00:20:13.419 "w_mbytes_per_sec": 0 00:20:13.419 }, 00:20:13.419 "claimed": false, 00:20:13.419 "zoned": false, 00:20:13.419 "supported_io_types": { 00:20:13.419 "read": true, 00:20:13.419 "write": true, 00:20:13.419 "unmap": true, 00:20:13.419 "flush": true, 00:20:13.419 "reset": true, 00:20:13.419 "nvme_admin": false, 00:20:13.419 "nvme_io": false, 00:20:13.419 "nvme_io_md": false, 00:20:13.419 "write_zeroes": true, 00:20:13.419 "zcopy": true, 00:20:13.419 "get_zone_info": false, 00:20:13.419 "zone_management": false, 00:20:13.419 "zone_append": false, 00:20:13.419 "compare": false, 00:20:13.419 "compare_and_write": false, 00:20:13.419 "abort": true, 00:20:13.419 "seek_hole": false, 00:20:13.419 "seek_data": false, 00:20:13.419 "copy": true, 00:20:13.419 "nvme_iov_md": false 00:20:13.419 }, 00:20:13.419 "memory_domains": [ 00:20:13.419 { 00:20:13.419 "dma_device_id": "system", 00:20:13.419 "dma_device_type": 1 00:20:13.419 }, 00:20:13.419 { 00:20:13.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.419 "dma_device_type": 2 00:20:13.419 } 00:20:13.419 ], 00:20:13.419 "driver_specific": {} 00:20:13.419 } 00:20:13.419 ] 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.419 04:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.419 BaseBdev3 00:20:13.419 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.419 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:13.419 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:13.419 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.419 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:13.419 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.419 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.419 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.419 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.419 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.677 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.677 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:13.677 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.677 04:42:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:13.677 [ 00:20:13.677 { 00:20:13.677 "name": "BaseBdev3", 00:20:13.677 "aliases": [ 00:20:13.677 "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1" 00:20:13.677 ], 00:20:13.677 "product_name": "Malloc disk", 00:20:13.677 "block_size": 512, 00:20:13.677 "num_blocks": 65536, 00:20:13.677 "uuid": "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1", 00:20:13.677 "assigned_rate_limits": { 00:20:13.677 "rw_ios_per_sec": 0, 00:20:13.677 "rw_mbytes_per_sec": 0, 00:20:13.677 "r_mbytes_per_sec": 0, 00:20:13.677 "w_mbytes_per_sec": 0 00:20:13.678 }, 00:20:13.678 "claimed": false, 00:20:13.678 "zoned": false, 00:20:13.678 "supported_io_types": { 00:20:13.678 "read": true, 00:20:13.678 "write": true, 00:20:13.678 "unmap": true, 00:20:13.678 "flush": true, 00:20:13.678 "reset": true, 00:20:13.678 "nvme_admin": false, 00:20:13.678 "nvme_io": false, 00:20:13.678 "nvme_io_md": false, 00:20:13.678 "write_zeroes": true, 00:20:13.678 "zcopy": true, 00:20:13.678 "get_zone_info": false, 00:20:13.678 "zone_management": false, 00:20:13.678 "zone_append": false, 00:20:13.678 "compare": false, 00:20:13.678 "compare_and_write": false, 00:20:13.678 "abort": true, 00:20:13.678 "seek_hole": false, 00:20:13.678 "seek_data": false, 00:20:13.678 "copy": true, 00:20:13.678 "nvme_iov_md": false 00:20:13.678 }, 00:20:13.678 "memory_domains": [ 00:20:13.678 { 00:20:13.678 "dma_device_id": "system", 00:20:13.678 "dma_device_type": 1 00:20:13.678 }, 00:20:13.678 { 00:20:13.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.678 "dma_device_type": 2 00:20:13.678 } 00:20:13.678 ], 00:20:13.678 "driver_specific": {} 00:20:13.678 } 00:20:13.678 ] 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:13.678 04:42:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.678 [2024-11-27 04:42:01.072957] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:13.678 [2024-11-27 04:42:01.073372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:13.678 [2024-11-27 04:42:01.073577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.678 [2024-11-27 04:42:01.076573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.678 04:42:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.678 "name": "Existed_Raid", 00:20:13.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.678 "strip_size_kb": 64, 00:20:13.678 "state": "configuring", 00:20:13.678 "raid_level": "raid5f", 00:20:13.678 "superblock": false, 00:20:13.678 "num_base_bdevs": 3, 00:20:13.678 "num_base_bdevs_discovered": 2, 00:20:13.678 "num_base_bdevs_operational": 3, 00:20:13.678 "base_bdevs_list": [ 00:20:13.678 { 00:20:13.678 "name": "BaseBdev1", 00:20:13.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.678 "is_configured": false, 00:20:13.678 "data_offset": 0, 00:20:13.678 "data_size": 0 00:20:13.678 }, 00:20:13.678 { 00:20:13.678 "name": "BaseBdev2", 00:20:13.678 "uuid": "f2a906af-ff79-4101-bd6a-42b351280232", 00:20:13.678 "is_configured": true, 00:20:13.678 "data_offset": 0, 00:20:13.678 "data_size": 65536 00:20:13.678 }, 00:20:13.678 { 00:20:13.678 "name": "BaseBdev3", 00:20:13.678 "uuid": "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1", 00:20:13.678 "is_configured": true, 
00:20:13.678 "data_offset": 0, 00:20:13.678 "data_size": 65536 00:20:13.678 } 00:20:13.678 ] 00:20:13.678 }' 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.678 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.244 [2024-11-27 04:42:01.581358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.244 04:42:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.244 "name": "Existed_Raid", 00:20:14.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.244 "strip_size_kb": 64, 00:20:14.244 "state": "configuring", 00:20:14.244 "raid_level": "raid5f", 00:20:14.244 "superblock": false, 00:20:14.244 "num_base_bdevs": 3, 00:20:14.244 "num_base_bdevs_discovered": 1, 00:20:14.244 "num_base_bdevs_operational": 3, 00:20:14.244 "base_bdevs_list": [ 00:20:14.244 { 00:20:14.244 "name": "BaseBdev1", 00:20:14.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.244 "is_configured": false, 00:20:14.244 "data_offset": 0, 00:20:14.244 "data_size": 0 00:20:14.244 }, 00:20:14.244 { 00:20:14.244 "name": null, 00:20:14.244 "uuid": "f2a906af-ff79-4101-bd6a-42b351280232", 00:20:14.244 "is_configured": false, 00:20:14.244 "data_offset": 0, 00:20:14.244 "data_size": 65536 00:20:14.244 }, 00:20:14.244 { 00:20:14.244 "name": "BaseBdev3", 00:20:14.244 "uuid": "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1", 00:20:14.244 "is_configured": true, 00:20:14.244 "data_offset": 0, 00:20:14.244 "data_size": 65536 00:20:14.244 } 00:20:14.244 ] 00:20:14.244 }' 00:20:14.244 04:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.244 04:42:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.502 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.502 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.502 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.502 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:14.502 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.760 [2024-11-27 04:42:02.198341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:14.760 BaseBdev1 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:14.760 04:42:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.760 [ 00:20:14.760 { 00:20:14.760 "name": "BaseBdev1", 00:20:14.760 "aliases": [ 00:20:14.760 "d4cc4e86-76ba-4dbe-b14c-5a1363a43721" 00:20:14.760 ], 00:20:14.760 "product_name": "Malloc disk", 00:20:14.760 "block_size": 512, 00:20:14.760 "num_blocks": 65536, 00:20:14.760 "uuid": "d4cc4e86-76ba-4dbe-b14c-5a1363a43721", 00:20:14.760 "assigned_rate_limits": { 00:20:14.760 "rw_ios_per_sec": 0, 00:20:14.760 "rw_mbytes_per_sec": 0, 00:20:14.760 "r_mbytes_per_sec": 0, 00:20:14.760 "w_mbytes_per_sec": 0 00:20:14.760 }, 00:20:14.760 "claimed": true, 00:20:14.760 "claim_type": "exclusive_write", 00:20:14.760 "zoned": false, 00:20:14.760 "supported_io_types": { 00:20:14.760 "read": true, 00:20:14.760 "write": true, 00:20:14.760 "unmap": true, 00:20:14.760 "flush": true, 00:20:14.760 "reset": true, 00:20:14.760 "nvme_admin": false, 00:20:14.760 "nvme_io": false, 00:20:14.760 "nvme_io_md": false, 00:20:14.760 "write_zeroes": true, 00:20:14.760 "zcopy": true, 00:20:14.760 "get_zone_info": false, 00:20:14.760 "zone_management": false, 00:20:14.760 "zone_append": false, 00:20:14.760 
"compare": false, 00:20:14.760 "compare_and_write": false, 00:20:14.760 "abort": true, 00:20:14.760 "seek_hole": false, 00:20:14.760 "seek_data": false, 00:20:14.760 "copy": true, 00:20:14.760 "nvme_iov_md": false 00:20:14.760 }, 00:20:14.760 "memory_domains": [ 00:20:14.760 { 00:20:14.760 "dma_device_id": "system", 00:20:14.760 "dma_device_type": 1 00:20:14.760 }, 00:20:14.760 { 00:20:14.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.760 "dma_device_type": 2 00:20:14.760 } 00:20:14.760 ], 00:20:14.760 "driver_specific": {} 00:20:14.760 } 00:20:14.760 ] 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.760 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.761 04:42:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.761 "name": "Existed_Raid", 00:20:14.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.761 "strip_size_kb": 64, 00:20:14.761 "state": "configuring", 00:20:14.761 "raid_level": "raid5f", 00:20:14.761 "superblock": false, 00:20:14.761 "num_base_bdevs": 3, 00:20:14.761 "num_base_bdevs_discovered": 2, 00:20:14.761 "num_base_bdevs_operational": 3, 00:20:14.761 "base_bdevs_list": [ 00:20:14.761 { 00:20:14.761 "name": "BaseBdev1", 00:20:14.761 "uuid": "d4cc4e86-76ba-4dbe-b14c-5a1363a43721", 00:20:14.761 "is_configured": true, 00:20:14.761 "data_offset": 0, 00:20:14.761 "data_size": 65536 00:20:14.761 }, 00:20:14.761 { 00:20:14.761 "name": null, 00:20:14.761 "uuid": "f2a906af-ff79-4101-bd6a-42b351280232", 00:20:14.761 "is_configured": false, 00:20:14.761 "data_offset": 0, 00:20:14.761 "data_size": 65536 00:20:14.761 }, 00:20:14.761 { 00:20:14.761 "name": "BaseBdev3", 00:20:14.761 "uuid": "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1", 00:20:14.761 "is_configured": true, 00:20:14.761 "data_offset": 0, 00:20:14.761 "data_size": 65536 00:20:14.761 } 00:20:14.761 ] 00:20:14.761 }' 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.761 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.379 04:42:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.379 [2024-11-27 04:42:02.790623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.379 04:42:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.379 "name": "Existed_Raid", 00:20:15.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.379 "strip_size_kb": 64, 00:20:15.379 "state": "configuring", 00:20:15.379 "raid_level": "raid5f", 00:20:15.379 "superblock": false, 00:20:15.379 "num_base_bdevs": 3, 00:20:15.379 "num_base_bdevs_discovered": 1, 00:20:15.379 "num_base_bdevs_operational": 3, 00:20:15.379 "base_bdevs_list": [ 00:20:15.379 { 00:20:15.379 "name": "BaseBdev1", 00:20:15.379 "uuid": "d4cc4e86-76ba-4dbe-b14c-5a1363a43721", 00:20:15.379 "is_configured": true, 00:20:15.379 "data_offset": 0, 00:20:15.379 "data_size": 65536 00:20:15.379 }, 00:20:15.379 { 00:20:15.379 "name": null, 00:20:15.379 "uuid": "f2a906af-ff79-4101-bd6a-42b351280232", 00:20:15.379 "is_configured": false, 00:20:15.379 "data_offset": 0, 00:20:15.379 "data_size": 65536 00:20:15.379 }, 00:20:15.379 { 00:20:15.379 "name": null, 
00:20:15.379 "uuid": "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1", 00:20:15.379 "is_configured": false, 00:20:15.379 "data_offset": 0, 00:20:15.379 "data_size": 65536 00:20:15.379 } 00:20:15.379 ] 00:20:15.379 }' 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.379 04:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.973 [2024-11-27 04:42:03.382971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.973 04:42:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.973 "name": "Existed_Raid", 00:20:15.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.973 "strip_size_kb": 64, 00:20:15.973 "state": "configuring", 00:20:15.973 "raid_level": "raid5f", 00:20:15.973 "superblock": false, 00:20:15.973 "num_base_bdevs": 3, 00:20:15.973 "num_base_bdevs_discovered": 2, 00:20:15.973 "num_base_bdevs_operational": 3, 00:20:15.973 "base_bdevs_list": [ 00:20:15.973 { 
00:20:15.973 "name": "BaseBdev1", 00:20:15.973 "uuid": "d4cc4e86-76ba-4dbe-b14c-5a1363a43721", 00:20:15.973 "is_configured": true, 00:20:15.973 "data_offset": 0, 00:20:15.973 "data_size": 65536 00:20:15.973 }, 00:20:15.973 { 00:20:15.973 "name": null, 00:20:15.973 "uuid": "f2a906af-ff79-4101-bd6a-42b351280232", 00:20:15.973 "is_configured": false, 00:20:15.973 "data_offset": 0, 00:20:15.973 "data_size": 65536 00:20:15.973 }, 00:20:15.973 { 00:20:15.973 "name": "BaseBdev3", 00:20:15.973 "uuid": "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1", 00:20:15.973 "is_configured": true, 00:20:15.973 "data_offset": 0, 00:20:15.973 "data_size": 65536 00:20:15.973 } 00:20:15.973 ] 00:20:15.973 }' 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.973 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.540 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.540 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:16.540 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.540 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.540 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.540 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:16.540 04:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:16.540 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.540 04:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.540 [2024-11-27 04:42:03.923141] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.540 "name": "Existed_Raid", 00:20:16.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.540 "strip_size_kb": 64, 00:20:16.540 "state": "configuring", 00:20:16.540 "raid_level": "raid5f", 00:20:16.540 "superblock": false, 00:20:16.540 "num_base_bdevs": 3, 00:20:16.540 "num_base_bdevs_discovered": 1, 00:20:16.540 "num_base_bdevs_operational": 3, 00:20:16.540 "base_bdevs_list": [ 00:20:16.540 { 00:20:16.540 "name": null, 00:20:16.540 "uuid": "d4cc4e86-76ba-4dbe-b14c-5a1363a43721", 00:20:16.540 "is_configured": false, 00:20:16.540 "data_offset": 0, 00:20:16.540 "data_size": 65536 00:20:16.540 }, 00:20:16.540 { 00:20:16.540 "name": null, 00:20:16.540 "uuid": "f2a906af-ff79-4101-bd6a-42b351280232", 00:20:16.540 "is_configured": false, 00:20:16.540 "data_offset": 0, 00:20:16.540 "data_size": 65536 00:20:16.540 }, 00:20:16.540 { 00:20:16.540 "name": "BaseBdev3", 00:20:16.540 "uuid": "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1", 00:20:16.540 "is_configured": true, 00:20:16.540 "data_offset": 0, 00:20:16.540 "data_size": 65536 00:20:16.540 } 00:20:16.540 ] 00:20:16.540 }' 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.540 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.107 [2024-11-27 04:42:04.616337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.107 04:42:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.107 "name": "Existed_Raid", 00:20:17.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.107 "strip_size_kb": 64, 00:20:17.107 "state": "configuring", 00:20:17.107 "raid_level": "raid5f", 00:20:17.107 "superblock": false, 00:20:17.107 "num_base_bdevs": 3, 00:20:17.107 "num_base_bdevs_discovered": 2, 00:20:17.107 "num_base_bdevs_operational": 3, 00:20:17.107 "base_bdevs_list": [ 00:20:17.107 { 00:20:17.107 "name": null, 00:20:17.107 "uuid": "d4cc4e86-76ba-4dbe-b14c-5a1363a43721", 00:20:17.107 "is_configured": false, 00:20:17.107 "data_offset": 0, 00:20:17.107 "data_size": 65536 00:20:17.107 }, 00:20:17.107 { 00:20:17.107 "name": "BaseBdev2", 00:20:17.107 "uuid": "f2a906af-ff79-4101-bd6a-42b351280232", 00:20:17.107 "is_configured": true, 00:20:17.107 "data_offset": 0, 00:20:17.107 "data_size": 65536 00:20:17.107 }, 00:20:17.107 { 00:20:17.107 "name": "BaseBdev3", 00:20:17.107 "uuid": "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1", 00:20:17.107 "is_configured": true, 00:20:17.107 "data_offset": 0, 00:20:17.107 "data_size": 65536 00:20:17.107 } 00:20:17.107 ] 00:20:17.107 }' 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.107 04:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.674 04:42:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d4cc4e86-76ba-4dbe-b14c-5a1363a43721 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.674 [2024-11-27 04:42:05.272110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:17.674 [2024-11-27 04:42:05.272208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:17.674 [2024-11-27 04:42:05.272230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:17.674 [2024-11-27 04:42:05.272602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:20:17.674 [2024-11-27 04:42:05.277865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:17.674 [2024-11-27 04:42:05.277901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:17.674 [2024-11-27 04:42:05.278377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.674 NewBaseBdev 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:17.674 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.674 04:42:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.932 [ 00:20:17.932 { 00:20:17.932 "name": "NewBaseBdev", 00:20:17.932 "aliases": [ 00:20:17.932 "d4cc4e86-76ba-4dbe-b14c-5a1363a43721" 00:20:17.932 ], 00:20:17.932 "product_name": "Malloc disk", 00:20:17.932 "block_size": 512, 00:20:17.932 "num_blocks": 65536, 00:20:17.932 "uuid": "d4cc4e86-76ba-4dbe-b14c-5a1363a43721", 00:20:17.932 "assigned_rate_limits": { 00:20:17.932 "rw_ios_per_sec": 0, 00:20:17.932 "rw_mbytes_per_sec": 0, 00:20:17.932 "r_mbytes_per_sec": 0, 00:20:17.932 "w_mbytes_per_sec": 0 00:20:17.932 }, 00:20:17.932 "claimed": true, 00:20:17.932 "claim_type": "exclusive_write", 00:20:17.932 "zoned": false, 00:20:17.932 "supported_io_types": { 00:20:17.932 "read": true, 00:20:17.932 "write": true, 00:20:17.932 "unmap": true, 00:20:17.932 "flush": true, 00:20:17.932 "reset": true, 00:20:17.932 "nvme_admin": false, 00:20:17.932 "nvme_io": false, 00:20:17.932 "nvme_io_md": false, 00:20:17.932 "write_zeroes": true, 00:20:17.932 "zcopy": true, 00:20:17.932 "get_zone_info": false, 00:20:17.932 "zone_management": false, 00:20:17.932 "zone_append": false, 00:20:17.932 "compare": false, 00:20:17.932 "compare_and_write": false, 00:20:17.932 "abort": true, 00:20:17.932 "seek_hole": false, 00:20:17.932 "seek_data": false, 00:20:17.932 "copy": true, 00:20:17.932 "nvme_iov_md": false 00:20:17.932 }, 00:20:17.932 "memory_domains": [ 00:20:17.932 { 00:20:17.932 "dma_device_id": "system", 00:20:17.932 "dma_device_type": 1 00:20:17.932 }, 00:20:17.932 { 00:20:17.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.932 "dma_device_type": 2 00:20:17.932 } 00:20:17.932 ], 00:20:17.932 "driver_specific": {} 00:20:17.932 } 00:20:17.932 ] 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:17.932 04:42:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.932 "name": "Existed_Raid", 00:20:17.932 "uuid": "03dd7346-cd62-4782-a054-bc90177e966c", 00:20:17.932 "strip_size_kb": 64, 00:20:17.932 "state": "online", 
00:20:17.932 "raid_level": "raid5f", 00:20:17.932 "superblock": false, 00:20:17.932 "num_base_bdevs": 3, 00:20:17.932 "num_base_bdevs_discovered": 3, 00:20:17.932 "num_base_bdevs_operational": 3, 00:20:17.932 "base_bdevs_list": [ 00:20:17.932 { 00:20:17.932 "name": "NewBaseBdev", 00:20:17.932 "uuid": "d4cc4e86-76ba-4dbe-b14c-5a1363a43721", 00:20:17.932 "is_configured": true, 00:20:17.932 "data_offset": 0, 00:20:17.932 "data_size": 65536 00:20:17.932 }, 00:20:17.932 { 00:20:17.932 "name": "BaseBdev2", 00:20:17.932 "uuid": "f2a906af-ff79-4101-bd6a-42b351280232", 00:20:17.932 "is_configured": true, 00:20:17.932 "data_offset": 0, 00:20:17.932 "data_size": 65536 00:20:17.932 }, 00:20:17.932 { 00:20:17.932 "name": "BaseBdev3", 00:20:17.932 "uuid": "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1", 00:20:17.932 "is_configured": true, 00:20:17.932 "data_offset": 0, 00:20:17.932 "data_size": 65536 00:20:17.932 } 00:20:17.932 ] 00:20:17.932 }' 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.932 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.191 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:18.191 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:18.191 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:18.191 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:18.191 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:18.191 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:18.191 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:18.191 04:42:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:18.191 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.191 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.191 [2024-11-27 04:42:05.809055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:18.449 "name": "Existed_Raid", 00:20:18.449 "aliases": [ 00:20:18.449 "03dd7346-cd62-4782-a054-bc90177e966c" 00:20:18.449 ], 00:20:18.449 "product_name": "Raid Volume", 00:20:18.449 "block_size": 512, 00:20:18.449 "num_blocks": 131072, 00:20:18.449 "uuid": "03dd7346-cd62-4782-a054-bc90177e966c", 00:20:18.449 "assigned_rate_limits": { 00:20:18.449 "rw_ios_per_sec": 0, 00:20:18.449 "rw_mbytes_per_sec": 0, 00:20:18.449 "r_mbytes_per_sec": 0, 00:20:18.449 "w_mbytes_per_sec": 0 00:20:18.449 }, 00:20:18.449 "claimed": false, 00:20:18.449 "zoned": false, 00:20:18.449 "supported_io_types": { 00:20:18.449 "read": true, 00:20:18.449 "write": true, 00:20:18.449 "unmap": false, 00:20:18.449 "flush": false, 00:20:18.449 "reset": true, 00:20:18.449 "nvme_admin": false, 00:20:18.449 "nvme_io": false, 00:20:18.449 "nvme_io_md": false, 00:20:18.449 "write_zeroes": true, 00:20:18.449 "zcopy": false, 00:20:18.449 "get_zone_info": false, 00:20:18.449 "zone_management": false, 00:20:18.449 "zone_append": false, 00:20:18.449 "compare": false, 00:20:18.449 "compare_and_write": false, 00:20:18.449 "abort": false, 00:20:18.449 "seek_hole": false, 00:20:18.449 "seek_data": false, 00:20:18.449 "copy": false, 00:20:18.449 "nvme_iov_md": false 00:20:18.449 }, 00:20:18.449 "driver_specific": { 00:20:18.449 "raid": { 00:20:18.449 "uuid": 
"03dd7346-cd62-4782-a054-bc90177e966c", 00:20:18.449 "strip_size_kb": 64, 00:20:18.449 "state": "online", 00:20:18.449 "raid_level": "raid5f", 00:20:18.449 "superblock": false, 00:20:18.449 "num_base_bdevs": 3, 00:20:18.449 "num_base_bdevs_discovered": 3, 00:20:18.449 "num_base_bdevs_operational": 3, 00:20:18.449 "base_bdevs_list": [ 00:20:18.449 { 00:20:18.449 "name": "NewBaseBdev", 00:20:18.449 "uuid": "d4cc4e86-76ba-4dbe-b14c-5a1363a43721", 00:20:18.449 "is_configured": true, 00:20:18.449 "data_offset": 0, 00:20:18.449 "data_size": 65536 00:20:18.449 }, 00:20:18.449 { 00:20:18.449 "name": "BaseBdev2", 00:20:18.449 "uuid": "f2a906af-ff79-4101-bd6a-42b351280232", 00:20:18.449 "is_configured": true, 00:20:18.449 "data_offset": 0, 00:20:18.449 "data_size": 65536 00:20:18.449 }, 00:20:18.449 { 00:20:18.449 "name": "BaseBdev3", 00:20:18.449 "uuid": "9d5e98d2-b48c-45c8-8243-5594bf5f8fc1", 00:20:18.449 "is_configured": true, 00:20:18.449 "data_offset": 0, 00:20:18.449 "data_size": 65536 00:20:18.449 } 00:20:18.449 ] 00:20:18.449 } 00:20:18.449 } 00:20:18.449 }' 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:18.449 BaseBdev2 00:20:18.449 BaseBdev3' 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.449 04:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.449 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.449 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.449 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.449 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:18.450 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.450 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.450 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.450 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.707 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.707 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.707 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.707 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:18.707 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.708 [2024-11-27 04:42:06.132913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:18.708 [2024-11-27 04:42:06.132987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:18.708 [2024-11-27 04:42:06.133121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.708 [2024-11-27 04:42:06.133532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:18.708 [2024-11-27 04:42:06.133573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80373 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80373 ']' 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80373 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80373 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:18.708 killing process with pid 80373 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80373' 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80373 00:20:18.708 [2024-11-27 04:42:06.171711] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:18.708 04:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80373 00:20:18.966 [2024-11-27 04:42:06.469124] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:20.339 00:20:20.339 real 0m12.189s 00:20:20.339 user 0m19.941s 00:20:20.339 sys 0m1.749s 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.339 ************************************ 00:20:20.339 END TEST raid5f_state_function_test 00:20:20.339 ************************************ 00:20:20.339 04:42:07 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:20:20.339 04:42:07 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:20.339 04:42:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.339 04:42:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:20.339 ************************************ 00:20:20.339 START TEST raid5f_state_function_test_sb 00:20:20.339 ************************************ 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:20.339 04:42:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:20.339 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81013 00:20:20.340 Process raid pid: 81013 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81013' 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81013 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81013 ']' 
00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.340 04:42:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.340 [2024-11-27 04:42:07.830400] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:20:20.340 [2024-11-27 04:42:07.830618] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.598 [2024-11-27 04:42:08.016988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.598 [2024-11-27 04:42:08.166518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.857 [2024-11-27 04:42:08.395649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:20.857 [2024-11-27 04:42:08.395756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.423 04:42:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.423 04:42:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:21.423 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:21.423 04:42:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.423 04:42:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.423 [2024-11-27 04:42:08.873385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:21.423 [2024-11-27 04:42:08.873493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:21.423 [2024-11-27 04:42:08.873513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:21.423 [2024-11-27 04:42:08.873533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:21.423 [2024-11-27 04:42:08.873545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:20:21.423 [2024-11-27 04:42:08.873562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:21.423 04:42:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.424 04:42:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.424 "name": "Existed_Raid", 00:20:21.424 "uuid": "827c5c58-f30b-4f8f-aadd-67cb76d3587d", 00:20:21.424 "strip_size_kb": 64, 00:20:21.424 "state": "configuring", 00:20:21.424 "raid_level": "raid5f", 00:20:21.424 "superblock": true, 00:20:21.424 "num_base_bdevs": 3, 00:20:21.424 "num_base_bdevs_discovered": 0, 00:20:21.424 "num_base_bdevs_operational": 3, 00:20:21.424 "base_bdevs_list": [ 00:20:21.424 { 00:20:21.424 "name": "BaseBdev1", 00:20:21.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.424 "is_configured": false, 00:20:21.424 "data_offset": 0, 00:20:21.424 "data_size": 0 00:20:21.424 }, 00:20:21.424 { 00:20:21.424 "name": "BaseBdev2", 00:20:21.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.424 "is_configured": false, 00:20:21.424 "data_offset": 0, 00:20:21.424 "data_size": 0 00:20:21.424 }, 00:20:21.424 { 00:20:21.424 "name": "BaseBdev3", 00:20:21.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.424 "is_configured": false, 00:20:21.424 "data_offset": 0, 00:20:21.424 "data_size": 0 00:20:21.424 } 00:20:21.424 ] 00:20:21.424 }' 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.424 04:42:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.991 [2024-11-27 04:42:09.385487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:21.991 
[2024-11-27 04:42:09.385578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.991 [2024-11-27 04:42:09.393414] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:21.991 [2024-11-27 04:42:09.393483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:21.991 [2024-11-27 04:42:09.393501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:21.991 [2024-11-27 04:42:09.393519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:21.991 [2024-11-27 04:42:09.393530] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:21.991 [2024-11-27 04:42:09.393548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.991 [2024-11-27 04:42:09.440588] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:21.991 BaseBdev1 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.991 [ 00:20:21.991 { 00:20:21.991 "name": "BaseBdev1", 00:20:21.991 "aliases": [ 00:20:21.991 "011e6b9a-8037-4045-b2dc-6f2266f9099e" 00:20:21.991 ], 00:20:21.991 "product_name": "Malloc disk", 00:20:21.991 "block_size": 512, 00:20:21.991 
"num_blocks": 65536, 00:20:21.991 "uuid": "011e6b9a-8037-4045-b2dc-6f2266f9099e", 00:20:21.991 "assigned_rate_limits": { 00:20:21.991 "rw_ios_per_sec": 0, 00:20:21.991 "rw_mbytes_per_sec": 0, 00:20:21.991 "r_mbytes_per_sec": 0, 00:20:21.991 "w_mbytes_per_sec": 0 00:20:21.991 }, 00:20:21.991 "claimed": true, 00:20:21.991 "claim_type": "exclusive_write", 00:20:21.991 "zoned": false, 00:20:21.991 "supported_io_types": { 00:20:21.991 "read": true, 00:20:21.991 "write": true, 00:20:21.991 "unmap": true, 00:20:21.991 "flush": true, 00:20:21.991 "reset": true, 00:20:21.991 "nvme_admin": false, 00:20:21.991 "nvme_io": false, 00:20:21.991 "nvme_io_md": false, 00:20:21.991 "write_zeroes": true, 00:20:21.991 "zcopy": true, 00:20:21.991 "get_zone_info": false, 00:20:21.991 "zone_management": false, 00:20:21.991 "zone_append": false, 00:20:21.991 "compare": false, 00:20:21.991 "compare_and_write": false, 00:20:21.991 "abort": true, 00:20:21.991 "seek_hole": false, 00:20:21.991 "seek_data": false, 00:20:21.991 "copy": true, 00:20:21.991 "nvme_iov_md": false 00:20:21.991 }, 00:20:21.991 "memory_domains": [ 00:20:21.991 { 00:20:21.991 "dma_device_id": "system", 00:20:21.991 "dma_device_type": 1 00:20:21.991 }, 00:20:21.991 { 00:20:21.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.991 "dma_device_type": 2 00:20:21.991 } 00:20:21.991 ], 00:20:21.991 "driver_specific": {} 00:20:21.991 } 00:20:21.991 ] 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.991 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.991 "name": "Existed_Raid", 00:20:21.991 "uuid": "862d1356-9b80-473e-b2d4-418f07cfccd1", 00:20:21.991 "strip_size_kb": 64, 00:20:21.991 "state": "configuring", 00:20:21.991 "raid_level": "raid5f", 00:20:21.991 "superblock": true, 00:20:21.991 "num_base_bdevs": 3, 00:20:21.991 "num_base_bdevs_discovered": 1, 00:20:21.991 "num_base_bdevs_operational": 3, 00:20:21.991 "base_bdevs_list": [ 00:20:21.991 { 00:20:21.991 
"name": "BaseBdev1", 00:20:21.991 "uuid": "011e6b9a-8037-4045-b2dc-6f2266f9099e", 00:20:21.991 "is_configured": true, 00:20:21.991 "data_offset": 2048, 00:20:21.991 "data_size": 63488 00:20:21.991 }, 00:20:21.991 { 00:20:21.992 "name": "BaseBdev2", 00:20:21.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.992 "is_configured": false, 00:20:21.992 "data_offset": 0, 00:20:21.992 "data_size": 0 00:20:21.992 }, 00:20:21.992 { 00:20:21.992 "name": "BaseBdev3", 00:20:21.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.992 "is_configured": false, 00:20:21.992 "data_offset": 0, 00:20:21.992 "data_size": 0 00:20:21.992 } 00:20:21.992 ] 00:20:21.992 }' 00:20:21.992 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.992 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.559 [2024-11-27 04:42:09.968872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:22.559 [2024-11-27 04:42:09.968984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:20:22.559 [2024-11-27 04:42:09.976891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:22.559 [2024-11-27 04:42:09.979605] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:22.559 [2024-11-27 04:42:09.979672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:22.559 [2024-11-27 04:42:09.979693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:22.559 [2024-11-27 04:42:09.979713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.559 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.560 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.560 04:42:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.560 04:42:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.560 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.560 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.560 "name": "Existed_Raid", 00:20:22.560 "uuid": "8a337ce9-5920-4ffc-aa81-1b804b7af5c8", 00:20:22.560 "strip_size_kb": 64, 00:20:22.560 "state": "configuring", 00:20:22.560 "raid_level": "raid5f", 00:20:22.560 "superblock": true, 00:20:22.560 "num_base_bdevs": 3, 00:20:22.560 "num_base_bdevs_discovered": 1, 00:20:22.560 "num_base_bdevs_operational": 3, 00:20:22.560 "base_bdevs_list": [ 00:20:22.560 { 00:20:22.560 "name": "BaseBdev1", 00:20:22.560 "uuid": "011e6b9a-8037-4045-b2dc-6f2266f9099e", 00:20:22.560 "is_configured": true, 00:20:22.560 "data_offset": 2048, 00:20:22.560 "data_size": 63488 00:20:22.560 }, 00:20:22.560 { 00:20:22.560 "name": "BaseBdev2", 00:20:22.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.560 "is_configured": false, 00:20:22.560 "data_offset": 0, 00:20:22.560 "data_size": 0 00:20:22.560 }, 00:20:22.560 { 00:20:22.560 "name": "BaseBdev3", 00:20:22.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.560 "is_configured": false, 00:20:22.560 "data_offset": 0, 00:20:22.560 "data_size": 
0 00:20:22.560 } 00:20:22.560 ] 00:20:22.560 }' 00:20:22.560 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.560 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.126 [2024-11-27 04:42:10.500753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.126 BaseBdev2 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.126 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.126 [ 00:20:23.126 { 00:20:23.126 "name": "BaseBdev2", 00:20:23.126 "aliases": [ 00:20:23.126 "762fc8c8-85ad-4484-80eb-4c77eca0ac90" 00:20:23.126 ], 00:20:23.126 "product_name": "Malloc disk", 00:20:23.126 "block_size": 512, 00:20:23.126 "num_blocks": 65536, 00:20:23.126 "uuid": "762fc8c8-85ad-4484-80eb-4c77eca0ac90", 00:20:23.126 "assigned_rate_limits": { 00:20:23.126 "rw_ios_per_sec": 0, 00:20:23.126 "rw_mbytes_per_sec": 0, 00:20:23.126 "r_mbytes_per_sec": 0, 00:20:23.126 "w_mbytes_per_sec": 0 00:20:23.126 }, 00:20:23.126 "claimed": true, 00:20:23.126 "claim_type": "exclusive_write", 00:20:23.126 "zoned": false, 00:20:23.126 "supported_io_types": { 00:20:23.126 "read": true, 00:20:23.126 "write": true, 00:20:23.126 "unmap": true, 00:20:23.126 "flush": true, 00:20:23.126 "reset": true, 00:20:23.127 "nvme_admin": false, 00:20:23.127 "nvme_io": false, 00:20:23.127 "nvme_io_md": false, 00:20:23.127 "write_zeroes": true, 00:20:23.127 "zcopy": true, 00:20:23.127 "get_zone_info": false, 00:20:23.127 "zone_management": false, 00:20:23.127 "zone_append": false, 00:20:23.127 "compare": false, 00:20:23.127 "compare_and_write": false, 00:20:23.127 "abort": true, 00:20:23.127 "seek_hole": false, 00:20:23.127 "seek_data": false, 00:20:23.127 "copy": true, 00:20:23.127 "nvme_iov_md": false 00:20:23.127 }, 00:20:23.127 "memory_domains": [ 00:20:23.127 { 00:20:23.127 "dma_device_id": "system", 00:20:23.127 "dma_device_type": 1 00:20:23.127 }, 00:20:23.127 { 00:20:23.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.127 "dma_device_type": 2 00:20:23.127 } 
00:20:23.127 ], 00:20:23.127 "driver_specific": {} 00:20:23.127 } 00:20:23.127 ] 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.127 04:42:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.127 "name": "Existed_Raid", 00:20:23.127 "uuid": "8a337ce9-5920-4ffc-aa81-1b804b7af5c8", 00:20:23.127 "strip_size_kb": 64, 00:20:23.127 "state": "configuring", 00:20:23.127 "raid_level": "raid5f", 00:20:23.127 "superblock": true, 00:20:23.127 "num_base_bdevs": 3, 00:20:23.127 "num_base_bdevs_discovered": 2, 00:20:23.127 "num_base_bdevs_operational": 3, 00:20:23.127 "base_bdevs_list": [ 00:20:23.127 { 00:20:23.127 "name": "BaseBdev1", 00:20:23.127 "uuid": "011e6b9a-8037-4045-b2dc-6f2266f9099e", 00:20:23.127 "is_configured": true, 00:20:23.127 "data_offset": 2048, 00:20:23.127 "data_size": 63488 00:20:23.127 }, 00:20:23.127 { 00:20:23.127 "name": "BaseBdev2", 00:20:23.127 "uuid": "762fc8c8-85ad-4484-80eb-4c77eca0ac90", 00:20:23.127 "is_configured": true, 00:20:23.127 "data_offset": 2048, 00:20:23.127 "data_size": 63488 00:20:23.127 }, 00:20:23.127 { 00:20:23.127 "name": "BaseBdev3", 00:20:23.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.127 "is_configured": false, 00:20:23.127 "data_offset": 0, 00:20:23.127 "data_size": 0 00:20:23.127 } 00:20:23.127 ] 00:20:23.127 }' 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.127 04:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.694 [2024-11-27 04:42:11.083509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:23.694 [2024-11-27 04:42:11.083987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:23.694 [2024-11-27 04:42:11.084030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:23.694 BaseBdev3 00:20:23.694 [2024-11-27 04:42:11.084400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.694 [2024-11-27 04:42:11.090097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:23.694 [2024-11-27 04:42:11.090131] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:23.694 [2024-11-27 04:42:11.090540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.694 [ 00:20:23.694 { 00:20:23.694 "name": "BaseBdev3", 00:20:23.694 "aliases": [ 00:20:23.694 "3a62ea06-5f31-4b95-9df1-315aa23f6cec" 00:20:23.694 ], 00:20:23.694 "product_name": "Malloc disk", 00:20:23.694 "block_size": 512, 00:20:23.694 "num_blocks": 65536, 00:20:23.694 "uuid": "3a62ea06-5f31-4b95-9df1-315aa23f6cec", 00:20:23.694 "assigned_rate_limits": { 00:20:23.694 "rw_ios_per_sec": 0, 00:20:23.694 "rw_mbytes_per_sec": 0, 00:20:23.694 "r_mbytes_per_sec": 0, 00:20:23.694 "w_mbytes_per_sec": 0 00:20:23.694 }, 00:20:23.694 "claimed": true, 00:20:23.694 "claim_type": "exclusive_write", 00:20:23.694 "zoned": false, 00:20:23.694 "supported_io_types": { 00:20:23.694 "read": true, 00:20:23.694 "write": true, 00:20:23.694 "unmap": true, 00:20:23.694 "flush": true, 00:20:23.694 "reset": true, 00:20:23.694 "nvme_admin": false, 00:20:23.694 "nvme_io": false, 00:20:23.694 "nvme_io_md": false, 00:20:23.694 "write_zeroes": true, 00:20:23.694 "zcopy": true, 00:20:23.694 "get_zone_info": false, 00:20:23.694 "zone_management": false, 00:20:23.694 "zone_append": false, 00:20:23.694 "compare": false, 00:20:23.694 "compare_and_write": false, 00:20:23.694 "abort": true, 00:20:23.694 "seek_hole": false, 00:20:23.694 "seek_data": false, 00:20:23.694 "copy": true, 00:20:23.694 
"nvme_iov_md": false 00:20:23.694 }, 00:20:23.694 "memory_domains": [ 00:20:23.694 { 00:20:23.694 "dma_device_id": "system", 00:20:23.694 "dma_device_type": 1 00:20:23.694 }, 00:20:23.694 { 00:20:23.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.694 "dma_device_type": 2 00:20:23.694 } 00:20:23.694 ], 00:20:23.694 "driver_specific": {} 00:20:23.694 } 00:20:23.694 ] 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.694 "name": "Existed_Raid", 00:20:23.694 "uuid": "8a337ce9-5920-4ffc-aa81-1b804b7af5c8", 00:20:23.694 "strip_size_kb": 64, 00:20:23.694 "state": "online", 00:20:23.694 "raid_level": "raid5f", 00:20:23.694 "superblock": true, 00:20:23.694 "num_base_bdevs": 3, 00:20:23.694 "num_base_bdevs_discovered": 3, 00:20:23.694 "num_base_bdevs_operational": 3, 00:20:23.694 "base_bdevs_list": [ 00:20:23.694 { 00:20:23.694 "name": "BaseBdev1", 00:20:23.694 "uuid": "011e6b9a-8037-4045-b2dc-6f2266f9099e", 00:20:23.694 "is_configured": true, 00:20:23.694 "data_offset": 2048, 00:20:23.694 "data_size": 63488 00:20:23.694 }, 00:20:23.694 { 00:20:23.694 "name": "BaseBdev2", 00:20:23.694 "uuid": "762fc8c8-85ad-4484-80eb-4c77eca0ac90", 00:20:23.694 "is_configured": true, 00:20:23.694 "data_offset": 2048, 00:20:23.694 "data_size": 63488 00:20:23.694 }, 00:20:23.694 { 00:20:23.694 "name": "BaseBdev3", 00:20:23.694 "uuid": "3a62ea06-5f31-4b95-9df1-315aa23f6cec", 00:20:23.694 "is_configured": true, 00:20:23.694 "data_offset": 2048, 00:20:23.694 "data_size": 63488 00:20:23.694 } 00:20:23.694 ] 00:20:23.694 }' 00:20:23.694 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.694 04:42:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.321 [2024-11-27 04:42:11.629183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:24.321 "name": "Existed_Raid", 00:20:24.321 "aliases": [ 00:20:24.321 "8a337ce9-5920-4ffc-aa81-1b804b7af5c8" 00:20:24.321 ], 00:20:24.321 "product_name": "Raid Volume", 00:20:24.321 "block_size": 512, 00:20:24.321 "num_blocks": 126976, 00:20:24.321 "uuid": "8a337ce9-5920-4ffc-aa81-1b804b7af5c8", 00:20:24.321 "assigned_rate_limits": { 00:20:24.321 "rw_ios_per_sec": 0, 00:20:24.321 
"rw_mbytes_per_sec": 0, 00:20:24.321 "r_mbytes_per_sec": 0, 00:20:24.321 "w_mbytes_per_sec": 0 00:20:24.321 }, 00:20:24.321 "claimed": false, 00:20:24.321 "zoned": false, 00:20:24.321 "supported_io_types": { 00:20:24.321 "read": true, 00:20:24.321 "write": true, 00:20:24.321 "unmap": false, 00:20:24.321 "flush": false, 00:20:24.321 "reset": true, 00:20:24.321 "nvme_admin": false, 00:20:24.321 "nvme_io": false, 00:20:24.321 "nvme_io_md": false, 00:20:24.321 "write_zeroes": true, 00:20:24.321 "zcopy": false, 00:20:24.321 "get_zone_info": false, 00:20:24.321 "zone_management": false, 00:20:24.321 "zone_append": false, 00:20:24.321 "compare": false, 00:20:24.321 "compare_and_write": false, 00:20:24.321 "abort": false, 00:20:24.321 "seek_hole": false, 00:20:24.321 "seek_data": false, 00:20:24.321 "copy": false, 00:20:24.321 "nvme_iov_md": false 00:20:24.321 }, 00:20:24.321 "driver_specific": { 00:20:24.321 "raid": { 00:20:24.321 "uuid": "8a337ce9-5920-4ffc-aa81-1b804b7af5c8", 00:20:24.321 "strip_size_kb": 64, 00:20:24.321 "state": "online", 00:20:24.321 "raid_level": "raid5f", 00:20:24.321 "superblock": true, 00:20:24.321 "num_base_bdevs": 3, 00:20:24.321 "num_base_bdevs_discovered": 3, 00:20:24.321 "num_base_bdevs_operational": 3, 00:20:24.321 "base_bdevs_list": [ 00:20:24.321 { 00:20:24.321 "name": "BaseBdev1", 00:20:24.321 "uuid": "011e6b9a-8037-4045-b2dc-6f2266f9099e", 00:20:24.321 "is_configured": true, 00:20:24.321 "data_offset": 2048, 00:20:24.321 "data_size": 63488 00:20:24.321 }, 00:20:24.321 { 00:20:24.321 "name": "BaseBdev2", 00:20:24.321 "uuid": "762fc8c8-85ad-4484-80eb-4c77eca0ac90", 00:20:24.321 "is_configured": true, 00:20:24.321 "data_offset": 2048, 00:20:24.321 "data_size": 63488 00:20:24.321 }, 00:20:24.321 { 00:20:24.321 "name": "BaseBdev3", 00:20:24.321 "uuid": "3a62ea06-5f31-4b95-9df1-315aa23f6cec", 00:20:24.321 "is_configured": true, 00:20:24.321 "data_offset": 2048, 00:20:24.321 "data_size": 63488 00:20:24.321 } 00:20:24.321 ] 00:20:24.321 } 
00:20:24.321 } 00:20:24.321 }' 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:24.321 BaseBdev2 00:20:24.321 BaseBdev3' 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.321 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
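The bdev_raid.sh@188 step reduces the raid bdev's `driver_specific.raid.base_bdevs_list` to the names of the configured base bdevs. A self-contained sketch of that filter against sample JSON trimmed from the log (not the real RPC round-trip):

```shell
# Offline sketch of the bdev_raid.sh@188 name extraction.
cat > raid_bdev.json <<'EOF'
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        { "name": "BaseBdev1", "is_configured": true },
        { "name": "BaseBdev2", "is_configured": true },
        { "name": "BaseBdev3", "is_configured": true }
      ]
    }
  }
}
EOF

# Only configured slots contribute a name; one name per line.
base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' raid_bdev.json)
echo "$base_bdev_names"
```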
00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.322 04:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.322 [2024-11-27 04:42:11.937090] 
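The @189/@192 comparisons above reduce each bdev to the string `<block_size> <md_size> <md_interleave> <dif_type>`; for a Malloc bdev the metadata fields are null, which jq's `join` renders as empty strings, hence the `'512   '` values being compared. An offline sketch, assuming jq 1.6's null-to-empty-string `join` behavior and a sample bdev record modeled on the log:

```shell
# Offline sketch of the verify_raid_bdev_properties comparison string.
# Null metadata fields join as empty strings, so a plain 512-byte Malloc
# bdev reduces to "512" followed by three spaces.
cat > base_bdev.json <<'EOF'
[
  {
    "name": "BaseBdev1",
    "block_size": 512,
    "md_size": null,
    "md_interleave": null,
    "dif_type": null
  }
]
EOF

cmp_base_bdev=$(jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' base_bdev.json)
echo "[$cmp_base_bdev]"
```

The test then pattern-matches this against the equivalent string computed from the raid bdev itself, so a metadata mismatch between raid and base bdevs fails the `[[ ... == ... ]]` check.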
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.580 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.581 04:42:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.581 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.581 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.581 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.581 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.581 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.581 "name": "Existed_Raid", 00:20:24.581 "uuid": "8a337ce9-5920-4ffc-aa81-1b804b7af5c8", 00:20:24.581 "strip_size_kb": 64, 00:20:24.581 "state": "online", 00:20:24.581 "raid_level": "raid5f", 00:20:24.581 "superblock": true, 00:20:24.581 "num_base_bdevs": 3, 00:20:24.581 "num_base_bdevs_discovered": 2, 00:20:24.581 "num_base_bdevs_operational": 2, 00:20:24.581 "base_bdevs_list": [ 00:20:24.581 { 00:20:24.581 "name": null, 00:20:24.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.581 "is_configured": false, 00:20:24.581 "data_offset": 0, 00:20:24.581 "data_size": 63488 00:20:24.581 }, 00:20:24.581 { 00:20:24.581 "name": "BaseBdev2", 00:20:24.581 "uuid": "762fc8c8-85ad-4484-80eb-4c77eca0ac90", 00:20:24.581 "is_configured": true, 00:20:24.581 "data_offset": 2048, 00:20:24.581 "data_size": 63488 00:20:24.581 }, 00:20:24.581 { 00:20:24.581 "name": "BaseBdev3", 00:20:24.581 "uuid": "3a62ea06-5f31-4b95-9df1-315aa23f6cec", 00:20:24.581 "is_configured": true, 00:20:24.581 "data_offset": 2048, 00:20:24.581 "data_size": 63488 00:20:24.581 } 00:20:24.581 ] 00:20:24.581 }' 00:20:24.581 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.581 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.148 04:42:12 
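After `bdev_malloc_delete BaseBdev1`, the raid stays online in degraded mode: the removed slot is kept in `base_bdevs_list` with a zeroed name/uuid and `is_configured: false`, and the discovered count drops to 2. A sketch of that bookkeeping against sample JSON trimmed from the log:

```shell
# Offline sketch: the discovered count equals the number of slots that
# are still configured after a base bdev is removed.
cat > degraded.json <<'EOF'
{
  "name": "Existed_Raid",
  "state": "online",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "base_bdevs_list": [
    { "name": null,        "is_configured": false },
    { "name": "BaseBdev2", "is_configured": true },
    { "name": "BaseBdev3", "is_configured": true }
  ]
}
EOF

discovered=$(jq '[.base_bdevs_list[] | select(.is_configured)] | length' degraded.json)
echo "$discovered"
```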
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:25.148 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:25.148 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.148 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.148 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:25.148 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.148 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.148 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.149 [2024-11-27 04:42:12.619826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:25.149 [2024-11-27 04:42:12.620099] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:25.149 [2024-11-27 04:42:12.709059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.149 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.408 [2024-11-27 04:42:12.773072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:25.408 [2024-11-27 04:42:12.773140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.408 BaseBdev2 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.408 [ 00:20:25.408 { 00:20:25.408 "name": "BaseBdev2", 00:20:25.408 "aliases": [ 00:20:25.408 "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60" 00:20:25.408 ], 00:20:25.408 "product_name": "Malloc disk", 00:20:25.408 "block_size": 512, 00:20:25.408 "num_blocks": 65536, 00:20:25.408 "uuid": "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60", 00:20:25.408 "assigned_rate_limits": { 00:20:25.408 "rw_ios_per_sec": 0, 00:20:25.408 "rw_mbytes_per_sec": 0, 00:20:25.408 "r_mbytes_per_sec": 0, 00:20:25.408 "w_mbytes_per_sec": 0 00:20:25.408 }, 00:20:25.408 "claimed": false, 00:20:25.408 "zoned": false, 00:20:25.408 "supported_io_types": { 00:20:25.408 "read": true, 00:20:25.408 "write": true, 00:20:25.408 "unmap": true, 00:20:25.408 "flush": true, 00:20:25.408 "reset": true, 00:20:25.408 "nvme_admin": false, 00:20:25.408 "nvme_io": false, 00:20:25.408 "nvme_io_md": false, 00:20:25.408 "write_zeroes": true, 00:20:25.408 "zcopy": true, 00:20:25.408 "get_zone_info": false, 00:20:25.408 "zone_management": false, 00:20:25.408 "zone_append": false, 
00:20:25.408 "compare": false, 00:20:25.408 "compare_and_write": false, 00:20:25.408 "abort": true, 00:20:25.408 "seek_hole": false, 00:20:25.408 "seek_data": false, 00:20:25.408 "copy": true, 00:20:25.408 "nvme_iov_md": false 00:20:25.408 }, 00:20:25.408 "memory_domains": [ 00:20:25.408 { 00:20:25.408 "dma_device_id": "system", 00:20:25.408 "dma_device_type": 1 00:20:25.408 }, 00:20:25.408 { 00:20:25.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.408 "dma_device_type": 2 00:20:25.408 } 00:20:25.408 ], 00:20:25.408 "driver_specific": {} 00:20:25.408 } 00:20:25.408 ] 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.408 04:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.667 BaseBdev3 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:25.667 
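The `waitforbdev` helper invoked around the BaseBdev2/BaseBdev3 creation above (autotest_common.sh@903-911) polls `rpc_cmd bdev_get_bdevs -b <name>` with a default 2000 ms timeout until the bdev appears. A self-contained sketch of that polling pattern, with the RPC lookup stubbed out (`lookup_bdev` is a stand-in for the real RPC call, not part of the test suite):

```shell
# Hedged sketch of the waitforbdev polling idea: retry a lookup until it
# succeeds or the poll budget runs out.
attempts=0
lookup_bdev() {
  attempts=$((attempts + 1))
  # Stub: pretend the bdev shows up on the third poll.
  [ "$attempts" -ge 3 ]
}

waitforbdev() {
  local polls=${1:-20}   # the real helper polls roughly every 0.1 s
  local i
  for ((i = 0; i < polls; i++)); do
    if lookup_bdev; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}

waitforbdev 20 && echo "bdev ready after $attempts polls"
```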
04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.667 [ 00:20:25.667 { 00:20:25.667 "name": "BaseBdev3", 00:20:25.667 "aliases": [ 00:20:25.667 "7a2c4774-8968-4da6-b37e-2b607089abdd" 00:20:25.667 ], 00:20:25.667 "product_name": "Malloc disk", 00:20:25.667 "block_size": 512, 00:20:25.667 "num_blocks": 65536, 00:20:25.667 "uuid": "7a2c4774-8968-4da6-b37e-2b607089abdd", 00:20:25.667 "assigned_rate_limits": { 00:20:25.667 "rw_ios_per_sec": 0, 00:20:25.667 "rw_mbytes_per_sec": 0, 00:20:25.667 "r_mbytes_per_sec": 0, 00:20:25.667 "w_mbytes_per_sec": 0 00:20:25.667 }, 00:20:25.667 "claimed": false, 00:20:25.667 "zoned": false, 00:20:25.667 "supported_io_types": { 00:20:25.667 "read": true, 00:20:25.667 "write": true, 00:20:25.667 "unmap": true, 00:20:25.667 "flush": true, 00:20:25.667 "reset": true, 00:20:25.667 "nvme_admin": false, 00:20:25.667 "nvme_io": false, 00:20:25.667 "nvme_io_md": false, 00:20:25.667 "write_zeroes": true, 00:20:25.667 "zcopy": true, 00:20:25.667 "get_zone_info": 
false, 00:20:25.667 "zone_management": false, 00:20:25.667 "zone_append": false, 00:20:25.667 "compare": false, 00:20:25.667 "compare_and_write": false, 00:20:25.667 "abort": true, 00:20:25.667 "seek_hole": false, 00:20:25.667 "seek_data": false, 00:20:25.667 "copy": true, 00:20:25.667 "nvme_iov_md": false 00:20:25.667 }, 00:20:25.667 "memory_domains": [ 00:20:25.667 { 00:20:25.667 "dma_device_id": "system", 00:20:25.667 "dma_device_type": 1 00:20:25.667 }, 00:20:25.667 { 00:20:25.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.667 "dma_device_type": 2 00:20:25.667 } 00:20:25.667 ], 00:20:25.667 "driver_specific": {} 00:20:25.667 } 00:20:25.667 ] 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.667 [2024-11-27 04:42:13.076885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:25.667 [2024-11-27 04:42:13.076956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:25.667 [2024-11-27 04:42:13.076987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:25.667 [2024-11-27 04:42:13.079402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:25.667 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.668 04:42:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.668 "name": "Existed_Raid", 00:20:25.668 "uuid": "915184a1-db19-4dec-a938-57b25331f479", 00:20:25.668 "strip_size_kb": 64, 00:20:25.668 "state": "configuring", 00:20:25.668 "raid_level": "raid5f", 00:20:25.668 "superblock": true, 00:20:25.668 "num_base_bdevs": 3, 00:20:25.668 "num_base_bdevs_discovered": 2, 00:20:25.668 "num_base_bdevs_operational": 3, 00:20:25.668 "base_bdevs_list": [ 00:20:25.668 { 00:20:25.668 "name": "BaseBdev1", 00:20:25.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.668 "is_configured": false, 00:20:25.668 "data_offset": 0, 00:20:25.668 "data_size": 0 00:20:25.668 }, 00:20:25.668 { 00:20:25.668 "name": "BaseBdev2", 00:20:25.668 "uuid": "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60", 00:20:25.668 "is_configured": true, 00:20:25.668 "data_offset": 2048, 00:20:25.668 "data_size": 63488 00:20:25.668 }, 00:20:25.668 { 00:20:25.668 "name": "BaseBdev3", 00:20:25.668 "uuid": "7a2c4774-8968-4da6-b37e-2b607089abdd", 00:20:25.668 "is_configured": true, 00:20:25.668 "data_offset": 2048, 00:20:25.668 "data_size": 63488 00:20:25.668 } 00:20:25.668 ] 00:20:25.668 }' 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.668 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.234 [2024-11-27 04:42:13.601137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.234 
04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.234 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.234 "name": "Existed_Raid", 00:20:26.234 "uuid": 
"915184a1-db19-4dec-a938-57b25331f479", 00:20:26.234 "strip_size_kb": 64, 00:20:26.234 "state": "configuring", 00:20:26.234 "raid_level": "raid5f", 00:20:26.234 "superblock": true, 00:20:26.234 "num_base_bdevs": 3, 00:20:26.234 "num_base_bdevs_discovered": 1, 00:20:26.234 "num_base_bdevs_operational": 3, 00:20:26.234 "base_bdevs_list": [ 00:20:26.234 { 00:20:26.234 "name": "BaseBdev1", 00:20:26.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.234 "is_configured": false, 00:20:26.234 "data_offset": 0, 00:20:26.234 "data_size": 0 00:20:26.234 }, 00:20:26.234 { 00:20:26.234 "name": null, 00:20:26.234 "uuid": "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60", 00:20:26.234 "is_configured": false, 00:20:26.234 "data_offset": 0, 00:20:26.234 "data_size": 63488 00:20:26.234 }, 00:20:26.235 { 00:20:26.235 "name": "BaseBdev3", 00:20:26.235 "uuid": "7a2c4774-8968-4da6-b37e-2b607089abdd", 00:20:26.235 "is_configured": true, 00:20:26.235 "data_offset": 2048, 00:20:26.235 "data_size": 63488 00:20:26.235 } 00:20:26.235 ] 00:20:26.235 }' 00:20:26.235 04:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.235 04:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:26.801 04:42:14 
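The bdev_raid.sh@295 check above confirms that after `bdev_raid_remove_base_bdev BaseBdev2` the raid bdev sits in the configuring state with slot 1 of `base_bdevs_list` reading `is_configured == false`. An offline sketch of that check against sample JSON trimmed from the log:

```shell
# Offline sketch of the bdev_raid.sh@295 slot check after removal.
cat > raid_state.json <<'EOF'
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "base_bdevs_list": [
      { "name": "BaseBdev1", "is_configured": false },
      { "name": null,        "is_configured": false },
      { "name": "BaseBdev3", "is_configured": true }
    ]
  }
]
EOF

slot1_configured=$(jq '.[0].base_bdevs_list[1].is_configured' raid_state.json)
echo "$slot1_configured"
```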
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.801 [2024-11-27 04:42:14.224450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:26.801 BaseBdev1 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:26.801 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.801 [ 00:20:26.801 { 00:20:26.801 "name": "BaseBdev1", 00:20:26.801 "aliases": [ 00:20:26.801 "8e0f8889-da20-40a3-b7c1-d2be530707b5" 00:20:26.801 ], 00:20:26.801 "product_name": "Malloc disk", 00:20:26.801 "block_size": 512, 00:20:26.801 "num_blocks": 65536, 00:20:26.801 "uuid": "8e0f8889-da20-40a3-b7c1-d2be530707b5", 00:20:26.801 "assigned_rate_limits": { 00:20:26.801 "rw_ios_per_sec": 0, 00:20:26.801 "rw_mbytes_per_sec": 0, 00:20:26.801 "r_mbytes_per_sec": 0, 00:20:26.802 "w_mbytes_per_sec": 0 00:20:26.802 }, 00:20:26.802 "claimed": true, 00:20:26.802 "claim_type": "exclusive_write", 00:20:26.802 "zoned": false, 00:20:26.802 "supported_io_types": { 00:20:26.802 "read": true, 00:20:26.802 "write": true, 00:20:26.802 "unmap": true, 00:20:26.802 "flush": true, 00:20:26.802 "reset": true, 00:20:26.802 "nvme_admin": false, 00:20:26.802 "nvme_io": false, 00:20:26.802 "nvme_io_md": false, 00:20:26.802 "write_zeroes": true, 00:20:26.802 "zcopy": true, 00:20:26.802 "get_zone_info": false, 00:20:26.802 "zone_management": false, 00:20:26.802 "zone_append": false, 00:20:26.802 "compare": false, 00:20:26.802 "compare_and_write": false, 00:20:26.802 "abort": true, 00:20:26.802 "seek_hole": false, 00:20:26.802 "seek_data": false, 00:20:26.802 "copy": true, 00:20:26.802 "nvme_iov_md": false 00:20:26.802 }, 00:20:26.802 "memory_domains": [ 00:20:26.802 { 00:20:26.802 "dma_device_id": "system", 00:20:26.802 "dma_device_type": 1 00:20:26.802 }, 00:20:26.802 { 00:20:26.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.802 "dma_device_type": 2 00:20:26.802 } 00:20:26.802 ], 00:20:26.802 "driver_specific": {} 00:20:26.802 } 00:20:26.802 ] 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.802 "name": "Existed_Raid", 00:20:26.802 "uuid": 
"915184a1-db19-4dec-a938-57b25331f479", 00:20:26.802 "strip_size_kb": 64, 00:20:26.802 "state": "configuring", 00:20:26.802 "raid_level": "raid5f", 00:20:26.802 "superblock": true, 00:20:26.802 "num_base_bdevs": 3, 00:20:26.802 "num_base_bdevs_discovered": 2, 00:20:26.802 "num_base_bdevs_operational": 3, 00:20:26.802 "base_bdevs_list": [ 00:20:26.802 { 00:20:26.802 "name": "BaseBdev1", 00:20:26.802 "uuid": "8e0f8889-da20-40a3-b7c1-d2be530707b5", 00:20:26.802 "is_configured": true, 00:20:26.802 "data_offset": 2048, 00:20:26.802 "data_size": 63488 00:20:26.802 }, 00:20:26.802 { 00:20:26.802 "name": null, 00:20:26.802 "uuid": "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60", 00:20:26.802 "is_configured": false, 00:20:26.802 "data_offset": 0, 00:20:26.802 "data_size": 63488 00:20:26.802 }, 00:20:26.802 { 00:20:26.802 "name": "BaseBdev3", 00:20:26.802 "uuid": "7a2c4774-8968-4da6-b37e-2b607089abdd", 00:20:26.802 "is_configured": true, 00:20:26.802 "data_offset": 2048, 00:20:26.802 "data_size": 63488 00:20:26.802 } 00:20:26.802 ] 00:20:26.802 }' 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.802 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:27.369 04:42:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.369 [2024-11-27 04:42:14.836631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.369 "name": "Existed_Raid", 00:20:27.369 "uuid": "915184a1-db19-4dec-a938-57b25331f479", 00:20:27.369 "strip_size_kb": 64, 00:20:27.369 "state": "configuring", 00:20:27.369 "raid_level": "raid5f", 00:20:27.369 "superblock": true, 00:20:27.369 "num_base_bdevs": 3, 00:20:27.369 "num_base_bdevs_discovered": 1, 00:20:27.369 "num_base_bdevs_operational": 3, 00:20:27.369 "base_bdevs_list": [ 00:20:27.369 { 00:20:27.369 "name": "BaseBdev1", 00:20:27.369 "uuid": "8e0f8889-da20-40a3-b7c1-d2be530707b5", 00:20:27.369 "is_configured": true, 00:20:27.369 "data_offset": 2048, 00:20:27.369 "data_size": 63488 00:20:27.369 }, 00:20:27.369 { 00:20:27.369 "name": null, 00:20:27.369 "uuid": "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60", 00:20:27.369 "is_configured": false, 00:20:27.369 "data_offset": 0, 00:20:27.369 "data_size": 63488 00:20:27.369 }, 00:20:27.369 { 00:20:27.369 "name": null, 00:20:27.369 "uuid": "7a2c4774-8968-4da6-b37e-2b607089abdd", 00:20:27.369 "is_configured": false, 00:20:27.369 "data_offset": 0, 00:20:27.369 "data_size": 63488 00:20:27.369 } 00:20:27.369 ] 00:20:27.369 }' 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.369 04:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 
-- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.937 [2024-11-27 04:42:15.416847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.937 04:42:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.937 "name": "Existed_Raid", 00:20:27.937 "uuid": "915184a1-db19-4dec-a938-57b25331f479", 00:20:27.937 "strip_size_kb": 64, 00:20:27.937 "state": "configuring", 00:20:27.937 "raid_level": "raid5f", 00:20:27.937 "superblock": true, 00:20:27.937 "num_base_bdevs": 3, 00:20:27.937 "num_base_bdevs_discovered": 2, 00:20:27.937 "num_base_bdevs_operational": 3, 00:20:27.937 "base_bdevs_list": [ 00:20:27.937 { 00:20:27.937 "name": "BaseBdev1", 00:20:27.937 "uuid": "8e0f8889-da20-40a3-b7c1-d2be530707b5", 00:20:27.937 "is_configured": true, 00:20:27.937 "data_offset": 2048, 00:20:27.937 "data_size": 63488 00:20:27.937 }, 00:20:27.937 { 00:20:27.937 "name": null, 00:20:27.937 "uuid": "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60", 00:20:27.937 "is_configured": false, 00:20:27.937 "data_offset": 0, 00:20:27.937 "data_size": 63488 00:20:27.937 }, 00:20:27.937 { 00:20:27.937 "name": "BaseBdev3", 00:20:27.937 "uuid": "7a2c4774-8968-4da6-b37e-2b607089abdd", 00:20:27.937 
"is_configured": true, 00:20:27.937 "data_offset": 2048, 00:20:27.937 "data_size": 63488 00:20:27.937 } 00:20:27.937 ] 00:20:27.937 }' 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.937 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.505 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.505 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.505 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.505 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:28.505 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.505 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:28.505 04:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:28.505 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.505 04:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.505 [2024-11-27 04:42:16.001110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.505 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.764 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.764 "name": "Existed_Raid", 00:20:28.764 "uuid": "915184a1-db19-4dec-a938-57b25331f479", 00:20:28.764 "strip_size_kb": 64, 00:20:28.764 "state": "configuring", 00:20:28.764 "raid_level": "raid5f", 00:20:28.764 "superblock": true, 00:20:28.764 "num_base_bdevs": 3, 00:20:28.764 "num_base_bdevs_discovered": 1, 00:20:28.764 "num_base_bdevs_operational": 3, 00:20:28.764 "base_bdevs_list": [ 00:20:28.764 { 00:20:28.764 "name": null, 00:20:28.764 
"uuid": "8e0f8889-da20-40a3-b7c1-d2be530707b5", 00:20:28.764 "is_configured": false, 00:20:28.764 "data_offset": 0, 00:20:28.764 "data_size": 63488 00:20:28.764 }, 00:20:28.764 { 00:20:28.764 "name": null, 00:20:28.764 "uuid": "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60", 00:20:28.764 "is_configured": false, 00:20:28.764 "data_offset": 0, 00:20:28.764 "data_size": 63488 00:20:28.764 }, 00:20:28.764 { 00:20:28.764 "name": "BaseBdev3", 00:20:28.764 "uuid": "7a2c4774-8968-4da6-b37e-2b607089abdd", 00:20:28.764 "is_configured": true, 00:20:28.764 "data_offset": 2048, 00:20:28.764 "data_size": 63488 00:20:28.764 } 00:20:28.764 ] 00:20:28.764 }' 00:20:28.764 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.764 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.023 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.023 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:29.023 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.023 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.023 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.282 [2024-11-27 04:42:16.656257] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.282 "name": "Existed_Raid", 00:20:29.282 "uuid": "915184a1-db19-4dec-a938-57b25331f479", 00:20:29.282 "strip_size_kb": 64, 00:20:29.282 "state": "configuring", 00:20:29.282 "raid_level": "raid5f", 00:20:29.282 "superblock": true, 00:20:29.282 "num_base_bdevs": 3, 00:20:29.282 "num_base_bdevs_discovered": 2, 00:20:29.282 "num_base_bdevs_operational": 3, 00:20:29.282 "base_bdevs_list": [ 00:20:29.282 { 00:20:29.282 "name": null, 00:20:29.282 "uuid": "8e0f8889-da20-40a3-b7c1-d2be530707b5", 00:20:29.282 "is_configured": false, 00:20:29.282 "data_offset": 0, 00:20:29.282 "data_size": 63488 00:20:29.282 }, 00:20:29.282 { 00:20:29.282 "name": "BaseBdev2", 00:20:29.282 "uuid": "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60", 00:20:29.282 "is_configured": true, 00:20:29.282 "data_offset": 2048, 00:20:29.282 "data_size": 63488 00:20:29.282 }, 00:20:29.282 { 00:20:29.282 "name": "BaseBdev3", 00:20:29.282 "uuid": "7a2c4774-8968-4da6-b37e-2b607089abdd", 00:20:29.282 "is_configured": true, 00:20:29.282 "data_offset": 2048, 00:20:29.282 "data_size": 63488 00:20:29.282 } 00:20:29.282 ] 00:20:29.282 }' 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.282 04:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8e0f8889-da20-40a3-b7c1-d2be530707b5 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 [2024-11-27 04:42:17.343195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:29.849 [2024-11-27 04:42:17.344650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:29.849 [2024-11-27 04:42:17.344683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:29.849 NewBaseBdev 00:20:29.849 [2024-11-27 04:42:17.345005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 [2024-11-27 04:42:17.349947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:29.849 [2024-11-27 04:42:17.349973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:29.849 [2024-11-27 04:42:17.350178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.849 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 [ 00:20:29.849 { 00:20:29.849 "name": "NewBaseBdev", 00:20:29.849 "aliases": [ 00:20:29.849 "8e0f8889-da20-40a3-b7c1-d2be530707b5" 00:20:29.849 ], 00:20:29.849 "product_name": "Malloc disk", 00:20:29.849 "block_size": 512, 
00:20:29.849 "num_blocks": 65536, 00:20:29.850 "uuid": "8e0f8889-da20-40a3-b7c1-d2be530707b5", 00:20:29.850 "assigned_rate_limits": { 00:20:29.850 "rw_ios_per_sec": 0, 00:20:29.850 "rw_mbytes_per_sec": 0, 00:20:29.850 "r_mbytes_per_sec": 0, 00:20:29.850 "w_mbytes_per_sec": 0 00:20:29.850 }, 00:20:29.850 "claimed": true, 00:20:29.850 "claim_type": "exclusive_write", 00:20:29.850 "zoned": false, 00:20:29.850 "supported_io_types": { 00:20:29.850 "read": true, 00:20:29.850 "write": true, 00:20:29.850 "unmap": true, 00:20:29.850 "flush": true, 00:20:29.850 "reset": true, 00:20:29.850 "nvme_admin": false, 00:20:29.850 "nvme_io": false, 00:20:29.850 "nvme_io_md": false, 00:20:29.850 "write_zeroes": true, 00:20:29.850 "zcopy": true, 00:20:29.850 "get_zone_info": false, 00:20:29.850 "zone_management": false, 00:20:29.850 "zone_append": false, 00:20:29.850 "compare": false, 00:20:29.850 "compare_and_write": false, 00:20:29.850 "abort": true, 00:20:29.850 "seek_hole": false, 00:20:29.850 "seek_data": false, 00:20:29.850 "copy": true, 00:20:29.850 "nvme_iov_md": false 00:20:29.850 }, 00:20:29.850 "memory_domains": [ 00:20:29.850 { 00:20:29.850 "dma_device_id": "system", 00:20:29.850 "dma_device_type": 1 00:20:29.850 }, 00:20:29.850 { 00:20:29.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.850 "dma_device_type": 2 00:20:29.850 } 00:20:29.850 ], 00:20:29.850 "driver_specific": {} 00:20:29.850 } 00:20:29.850 ] 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.850 "name": "Existed_Raid", 00:20:29.850 "uuid": "915184a1-db19-4dec-a938-57b25331f479", 00:20:29.850 "strip_size_kb": 64, 00:20:29.850 "state": "online", 00:20:29.850 "raid_level": "raid5f", 00:20:29.850 "superblock": true, 00:20:29.850 "num_base_bdevs": 3, 00:20:29.850 "num_base_bdevs_discovered": 3, 00:20:29.850 "num_base_bdevs_operational": 3, 00:20:29.850 "base_bdevs_list": [ 00:20:29.850 { 00:20:29.850 "name": 
"NewBaseBdev", 00:20:29.850 "uuid": "8e0f8889-da20-40a3-b7c1-d2be530707b5", 00:20:29.850 "is_configured": true, 00:20:29.850 "data_offset": 2048, 00:20:29.850 "data_size": 63488 00:20:29.850 }, 00:20:29.850 { 00:20:29.850 "name": "BaseBdev2", 00:20:29.850 "uuid": "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60", 00:20:29.850 "is_configured": true, 00:20:29.850 "data_offset": 2048, 00:20:29.850 "data_size": 63488 00:20:29.850 }, 00:20:29.850 { 00:20:29.850 "name": "BaseBdev3", 00:20:29.850 "uuid": "7a2c4774-8968-4da6-b37e-2b607089abdd", 00:20:29.850 "is_configured": true, 00:20:29.850 "data_offset": 2048, 00:20:29.850 "data_size": 63488 00:20:29.850 } 00:20:29.850 ] 00:20:29.850 }' 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.850 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.418 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:30.418 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:30.418 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:30.418 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:30.418 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:30.418 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:30.418 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:30.418 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:30.418 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.418 04:42:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.419 [2024-11-27 04:42:17.924592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.419 04:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.419 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:30.419 "name": "Existed_Raid", 00:20:30.419 "aliases": [ 00:20:30.419 "915184a1-db19-4dec-a938-57b25331f479" 00:20:30.419 ], 00:20:30.419 "product_name": "Raid Volume", 00:20:30.419 "block_size": 512, 00:20:30.419 "num_blocks": 126976, 00:20:30.419 "uuid": "915184a1-db19-4dec-a938-57b25331f479", 00:20:30.419 "assigned_rate_limits": { 00:20:30.419 "rw_ios_per_sec": 0, 00:20:30.419 "rw_mbytes_per_sec": 0, 00:20:30.419 "r_mbytes_per_sec": 0, 00:20:30.419 "w_mbytes_per_sec": 0 00:20:30.419 }, 00:20:30.419 "claimed": false, 00:20:30.419 "zoned": false, 00:20:30.419 "supported_io_types": { 00:20:30.419 "read": true, 00:20:30.419 "write": true, 00:20:30.419 "unmap": false, 00:20:30.419 "flush": false, 00:20:30.419 "reset": true, 00:20:30.419 "nvme_admin": false, 00:20:30.419 "nvme_io": false, 00:20:30.419 "nvme_io_md": false, 00:20:30.419 "write_zeroes": true, 00:20:30.419 "zcopy": false, 00:20:30.419 "get_zone_info": false, 00:20:30.419 "zone_management": false, 00:20:30.419 "zone_append": false, 00:20:30.419 "compare": false, 00:20:30.419 "compare_and_write": false, 00:20:30.419 "abort": false, 00:20:30.419 "seek_hole": false, 00:20:30.419 "seek_data": false, 00:20:30.419 "copy": false, 00:20:30.419 "nvme_iov_md": false 00:20:30.419 }, 00:20:30.419 "driver_specific": { 00:20:30.419 "raid": { 00:20:30.419 "uuid": "915184a1-db19-4dec-a938-57b25331f479", 00:20:30.419 "strip_size_kb": 64, 00:20:30.419 "state": "online", 00:20:30.419 "raid_level": "raid5f", 00:20:30.419 "superblock": true, 00:20:30.419 "num_base_bdevs": 3, 00:20:30.419 
"num_base_bdevs_discovered": 3, 00:20:30.419 "num_base_bdevs_operational": 3, 00:20:30.419 "base_bdevs_list": [ 00:20:30.419 { 00:20:30.419 "name": "NewBaseBdev", 00:20:30.419 "uuid": "8e0f8889-da20-40a3-b7c1-d2be530707b5", 00:20:30.419 "is_configured": true, 00:20:30.419 "data_offset": 2048, 00:20:30.419 "data_size": 63488 00:20:30.419 }, 00:20:30.419 { 00:20:30.419 "name": "BaseBdev2", 00:20:30.419 "uuid": "fc7154b3-bfc3-4f4e-8f28-b8a10d647f60", 00:20:30.419 "is_configured": true, 00:20:30.419 "data_offset": 2048, 00:20:30.419 "data_size": 63488 00:20:30.419 }, 00:20:30.419 { 00:20:30.419 "name": "BaseBdev3", 00:20:30.419 "uuid": "7a2c4774-8968-4da6-b37e-2b607089abdd", 00:20:30.419 "is_configured": true, 00:20:30.419 "data_offset": 2048, 00:20:30.419 "data_size": 63488 00:20:30.419 } 00:20:30.419 ] 00:20:30.419 } 00:20:30.419 } 00:20:30.419 }' 00:20:30.419 04:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:30.419 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:30.419 BaseBdev2 00:20:30.419 BaseBdev3' 00:20:30.419 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.679 04:42:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:30.679 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 04:42:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 [2024-11-27 04:42:18.276410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:30.680 [2024-11-27 04:42:18.276441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:30.680 [2024-11-27 04:42:18.276524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:30.680 [2024-11-27 04:42:18.276910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:30.680 [2024-11-27 04:42:18.276935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81013 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81013 ']' 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81013 00:20:30.680 04:42:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.680 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81013 00:20:30.938 killing process with pid 81013 00:20:30.938 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.938 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.939 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81013' 00:20:30.939 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81013 00:20:30.939 [2024-11-27 04:42:18.318575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:30.939 04:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81013 00:20:31.198 [2024-11-27 04:42:18.580884] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:32.134 04:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:32.134 00:20:32.134 real 0m11.910s 00:20:32.134 user 0m19.730s 00:20:32.134 sys 0m1.697s 00:20:32.134 04:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.134 ************************************ 00:20:32.134 END TEST raid5f_state_function_test_sb 00:20:32.134 ************************************ 00:20:32.134 04:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.134 04:42:19 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:20:32.134 04:42:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:32.134 
04:42:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.134 04:42:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.134 ************************************ 00:20:32.134 START TEST raid5f_superblock_test 00:20:32.134 ************************************ 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81643 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81643 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:32.134 04:42:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81643 ']' 00:20:32.135 04:42:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.135 04:42:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.135 04:42:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.135 04:42:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.135 04:42:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.393 [2024-11-27 04:42:19.808055] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:20:32.394 [2024-11-27 04:42:19.808253] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81643 ] 00:20:32.394 [2024-11-27 04:42:19.998250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.652 [2024-11-27 04:42:20.150655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.911 [2024-11-27 04:42:20.351273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:32.911 [2024-11-27 04:42:20.351473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.170 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.429 malloc1 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.429 [2024-11-27 04:42:20.820561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:33.429 [2024-11-27 04:42:20.820635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.429 [2024-11-27 04:42:20.820668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:33.429 [2024-11-27 04:42:20.820684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.429 [2024-11-27 04:42:20.823625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.429 [2024-11-27 04:42:20.823671] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:33.429 pt1 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.429 malloc2 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.429 [2024-11-27 04:42:20.876916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:33.429 [2024-11-27 04:42:20.877000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.429 [2024-11-27 04:42:20.877038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:33.429 [2024-11-27 04:42:20.877052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.429 [2024-11-27 04:42:20.879989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.429 [2024-11-27 04:42:20.880032] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:33.429 pt2 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.429 malloc3 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.429 [2024-11-27 04:42:20.945888] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:33.429 [2024-11-27 04:42:20.945952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.429 [2024-11-27 04:42:20.945987] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:33.429 [2024-11-27 04:42:20.946003] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.429 [2024-11-27 04:42:20.948916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.429 [2024-11-27 04:42:20.948960] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:33.429 pt3 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.429 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.429 [2024-11-27 04:42:20.953987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:33.430 [2024-11-27 04:42:20.956517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:33.430 [2024-11-27 04:42:20.956608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:33.430 [2024-11-27 04:42:20.956857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:33.430 [2024-11-27 04:42:20.956886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:20:33.430 [2024-11-27 04:42:20.957209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:33.430 [2024-11-27 04:42:20.962500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:33.430 [2024-11-27 04:42:20.962524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:33.430 [2024-11-27 04:42:20.962754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.430 04:42:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.430 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.430 "name": "raid_bdev1", 00:20:33.430 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:33.430 "strip_size_kb": 64, 00:20:33.430 "state": "online", 00:20:33.430 "raid_level": "raid5f", 00:20:33.430 "superblock": true, 00:20:33.430 "num_base_bdevs": 3, 00:20:33.430 "num_base_bdevs_discovered": 3, 00:20:33.430 "num_base_bdevs_operational": 3, 00:20:33.430 "base_bdevs_list": [ 00:20:33.430 { 00:20:33.430 "name": "pt1", 00:20:33.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:33.430 "is_configured": true, 00:20:33.430 "data_offset": 2048, 00:20:33.430 "data_size": 63488 00:20:33.430 }, 00:20:33.430 { 00:20:33.430 "name": "pt2", 00:20:33.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:33.430 "is_configured": true, 00:20:33.430 "data_offset": 2048, 00:20:33.430 "data_size": 63488 00:20:33.430 }, 00:20:33.430 { 00:20:33.430 "name": "pt3", 00:20:33.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:33.430 "is_configured": true, 00:20:33.430 "data_offset": 2048, 00:20:33.430 "data_size": 63488 00:20:33.430 } 00:20:33.430 ] 00:20:33.430 }' 00:20:33.430 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.430 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:33.996 04:42:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.996 [2024-11-27 04:42:21.481169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.996 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:33.996 "name": "raid_bdev1", 00:20:33.996 "aliases": [ 00:20:33.996 "f900cb16-1a59-46ae-ace8-5c39adef7634" 00:20:33.996 ], 00:20:33.996 "product_name": "Raid Volume", 00:20:33.996 "block_size": 512, 00:20:33.996 "num_blocks": 126976, 00:20:33.996 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:33.996 "assigned_rate_limits": { 00:20:33.996 "rw_ios_per_sec": 0, 00:20:33.996 "rw_mbytes_per_sec": 0, 00:20:33.996 "r_mbytes_per_sec": 0, 00:20:33.996 "w_mbytes_per_sec": 0 00:20:33.996 }, 00:20:33.996 "claimed": false, 00:20:33.996 "zoned": false, 00:20:33.996 "supported_io_types": { 00:20:33.996 "read": true, 00:20:33.996 "write": true, 00:20:33.996 "unmap": false, 00:20:33.996 "flush": false, 00:20:33.996 "reset": true, 00:20:33.996 "nvme_admin": false, 00:20:33.996 "nvme_io": false, 00:20:33.996 "nvme_io_md": false, 
00:20:33.996 "write_zeroes": true, 00:20:33.996 "zcopy": false, 00:20:33.996 "get_zone_info": false, 00:20:33.996 "zone_management": false, 00:20:33.996 "zone_append": false, 00:20:33.996 "compare": false, 00:20:33.996 "compare_and_write": false, 00:20:33.996 "abort": false, 00:20:33.996 "seek_hole": false, 00:20:33.996 "seek_data": false, 00:20:33.996 "copy": false, 00:20:33.996 "nvme_iov_md": false 00:20:33.996 }, 00:20:33.996 "driver_specific": { 00:20:33.996 "raid": { 00:20:33.996 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:33.996 "strip_size_kb": 64, 00:20:33.996 "state": "online", 00:20:33.996 "raid_level": "raid5f", 00:20:33.996 "superblock": true, 00:20:33.996 "num_base_bdevs": 3, 00:20:33.996 "num_base_bdevs_discovered": 3, 00:20:33.996 "num_base_bdevs_operational": 3, 00:20:33.996 "base_bdevs_list": [ 00:20:33.996 { 00:20:33.996 "name": "pt1", 00:20:33.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:33.997 "is_configured": true, 00:20:33.997 "data_offset": 2048, 00:20:33.997 "data_size": 63488 00:20:33.997 }, 00:20:33.997 { 00:20:33.997 "name": "pt2", 00:20:33.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:33.997 "is_configured": true, 00:20:33.997 "data_offset": 2048, 00:20:33.997 "data_size": 63488 00:20:33.997 }, 00:20:33.997 { 00:20:33.997 "name": "pt3", 00:20:33.997 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:33.997 "is_configured": true, 00:20:33.997 "data_offset": 2048, 00:20:33.997 "data_size": 63488 00:20:33.997 } 00:20:33.997 ] 00:20:33.997 } 00:20:33.997 } 00:20:33.997 }' 00:20:33.997 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:33.997 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:33.997 pt2 00:20:33.997 pt3' 00:20:33.997 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.256 
04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:34.256 [2024-11-27 04:42:21.805237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f900cb16-1a59-46ae-ace8-5c39adef7634 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f900cb16-1a59-46ae-ace8-5c39adef7634 ']' 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:34.256 04:42:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.256 [2024-11-27 04:42:21.844991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.256 [2024-11-27 04:42:21.845028] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.256 [2024-11-27 04:42:21.845119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.256 [2024-11-27 04:42:21.845287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.256 [2024-11-27 04:42:21.845304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.256 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.516 04:42:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.516 [2024-11-27 04:42:21.993095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:34.516 [2024-11-27 04:42:21.995645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:34.516 [2024-11-27 04:42:21.995720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:34.516 [2024-11-27 04:42:21.995795] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:34.516 [2024-11-27 04:42:21.995874] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:34.516 [2024-11-27 04:42:21.995908] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:34.516 [2024-11-27 04:42:21.995934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.516 [2024-11-27 04:42:21.995948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:34.516 request: 00:20:34.516 { 00:20:34.516 "name": "raid_bdev1", 00:20:34.516 "raid_level": "raid5f", 00:20:34.516 "base_bdevs": [ 00:20:34.516 "malloc1", 00:20:34.516 "malloc2", 00:20:34.516 "malloc3" 00:20:34.516 ], 00:20:34.516 "strip_size_kb": 64, 00:20:34.516 "superblock": false, 00:20:34.516 "method": "bdev_raid_create", 00:20:34.516 "req_id": 1 00:20:34.516 } 00:20:34.516 Got JSON-RPC error response 00:20:34.516 response: 00:20:34.516 { 00:20:34.516 "code": -17, 00:20:34.516 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:34.516 } 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:34.516 
04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.516 [2024-11-27 04:42:22.061050] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:34.516 [2024-11-27 04:42:22.061238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.516 [2024-11-27 04:42:22.061321] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:34.516 [2024-11-27 04:42:22.061452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.516 [2024-11-27 04:42:22.064389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.516 [2024-11-27 04:42:22.064543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:34.516 [2024-11-27 04:42:22.064670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:34.516 [2024-11-27 04:42:22.064741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:34.516 pt1 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.516 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.516 "name": "raid_bdev1", 00:20:34.516 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:34.516 "strip_size_kb": 64, 00:20:34.516 "state": "configuring", 00:20:34.516 "raid_level": "raid5f", 00:20:34.516 "superblock": true, 00:20:34.516 "num_base_bdevs": 3, 00:20:34.516 "num_base_bdevs_discovered": 1, 00:20:34.516 
"num_base_bdevs_operational": 3, 00:20:34.517 "base_bdevs_list": [ 00:20:34.517 { 00:20:34.517 "name": "pt1", 00:20:34.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:34.517 "is_configured": true, 00:20:34.517 "data_offset": 2048, 00:20:34.517 "data_size": 63488 00:20:34.517 }, 00:20:34.517 { 00:20:34.517 "name": null, 00:20:34.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:34.517 "is_configured": false, 00:20:34.517 "data_offset": 2048, 00:20:34.517 "data_size": 63488 00:20:34.517 }, 00:20:34.517 { 00:20:34.517 "name": null, 00:20:34.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:34.517 "is_configured": false, 00:20:34.517 "data_offset": 2048, 00:20:34.517 "data_size": 63488 00:20:34.517 } 00:20:34.517 ] 00:20:34.517 }' 00:20:34.517 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.517 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.084 [2024-11-27 04:42:22.597250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:35.084 [2024-11-27 04:42:22.597352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.084 [2024-11-27 04:42:22.597391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:35.084 [2024-11-27 04:42:22.597406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.084 [2024-11-27 04:42:22.598010] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.084 [2024-11-27 04:42:22.598051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:35.084 [2024-11-27 04:42:22.598175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:35.084 [2024-11-27 04:42:22.598216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:35.084 pt2 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.084 [2024-11-27 04:42:22.605204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.084 "name": "raid_bdev1", 00:20:35.084 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:35.084 "strip_size_kb": 64, 00:20:35.084 "state": "configuring", 00:20:35.084 "raid_level": "raid5f", 00:20:35.084 "superblock": true, 00:20:35.084 "num_base_bdevs": 3, 00:20:35.084 "num_base_bdevs_discovered": 1, 00:20:35.084 "num_base_bdevs_operational": 3, 00:20:35.084 "base_bdevs_list": [ 00:20:35.084 { 00:20:35.084 "name": "pt1", 00:20:35.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.084 "is_configured": true, 00:20:35.084 "data_offset": 2048, 00:20:35.084 "data_size": 63488 00:20:35.084 }, 00:20:35.084 { 00:20:35.084 "name": null, 00:20:35.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.084 "is_configured": false, 00:20:35.084 "data_offset": 0, 00:20:35.084 "data_size": 63488 00:20:35.084 }, 00:20:35.084 { 00:20:35.084 "name": null, 00:20:35.084 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:35.084 "is_configured": false, 00:20:35.084 "data_offset": 2048, 00:20:35.084 "data_size": 63488 00:20:35.084 } 00:20:35.084 ] 00:20:35.084 }' 00:20:35.084 04:42:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.084 04:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.651 [2024-11-27 04:42:23.125394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:35.651 [2024-11-27 04:42:23.125481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.651 [2024-11-27 04:42:23.125509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:35.651 [2024-11-27 04:42:23.125527] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.651 [2024-11-27 04:42:23.126145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.651 [2024-11-27 04:42:23.126183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:35.651 [2024-11-27 04:42:23.126287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:35.651 [2024-11-27 04:42:23.126332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:35.651 pt2 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:35.651 04:42:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.651 [2024-11-27 04:42:23.133369] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:35.651 [2024-11-27 04:42:23.133427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.651 [2024-11-27 04:42:23.133449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:35.651 [2024-11-27 04:42:23.133466] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.651 [2024-11-27 04:42:23.133965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.651 [2024-11-27 04:42:23.134005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:35.651 [2024-11-27 04:42:23.134106] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:35.651 [2024-11-27 04:42:23.134145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:35.651 [2024-11-27 04:42:23.134310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:35.651 [2024-11-27 04:42:23.134333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:35.651 [2024-11-27 04:42:23.134637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:35.651 [2024-11-27 04:42:23.139951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:35.651 [2024-11-27 04:42:23.140091] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:35.651 [2024-11-27 04:42:23.140471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.651 pt3 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.651 "name": "raid_bdev1", 00:20:35.651 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:35.651 "strip_size_kb": 64, 00:20:35.651 "state": "online", 00:20:35.651 "raid_level": "raid5f", 00:20:35.651 "superblock": true, 00:20:35.651 "num_base_bdevs": 3, 00:20:35.651 "num_base_bdevs_discovered": 3, 00:20:35.651 "num_base_bdevs_operational": 3, 00:20:35.651 "base_bdevs_list": [ 00:20:35.651 { 00:20:35.651 "name": "pt1", 00:20:35.651 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.651 "is_configured": true, 00:20:35.651 "data_offset": 2048, 00:20:35.651 "data_size": 63488 00:20:35.651 }, 00:20:35.651 { 00:20:35.651 "name": "pt2", 00:20:35.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.651 "is_configured": true, 00:20:35.651 "data_offset": 2048, 00:20:35.651 "data_size": 63488 00:20:35.651 }, 00:20:35.651 { 00:20:35.651 "name": "pt3", 00:20:35.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:35.651 "is_configured": true, 00:20:35.651 "data_offset": 2048, 00:20:35.651 "data_size": 63488 00:20:35.651 } 00:20:35.651 ] 00:20:35.651 }' 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.651 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.218 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.219 [2024-11-27 04:42:23.658561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:36.219 "name": "raid_bdev1", 00:20:36.219 "aliases": [ 00:20:36.219 "f900cb16-1a59-46ae-ace8-5c39adef7634" 00:20:36.219 ], 00:20:36.219 "product_name": "Raid Volume", 00:20:36.219 "block_size": 512, 00:20:36.219 "num_blocks": 126976, 00:20:36.219 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:36.219 "assigned_rate_limits": { 00:20:36.219 "rw_ios_per_sec": 0, 00:20:36.219 "rw_mbytes_per_sec": 0, 00:20:36.219 "r_mbytes_per_sec": 0, 00:20:36.219 "w_mbytes_per_sec": 0 00:20:36.219 }, 00:20:36.219 "claimed": false, 00:20:36.219 "zoned": false, 00:20:36.219 "supported_io_types": { 00:20:36.219 "read": true, 00:20:36.219 "write": true, 00:20:36.219 "unmap": false, 00:20:36.219 "flush": false, 00:20:36.219 "reset": true, 00:20:36.219 "nvme_admin": false, 00:20:36.219 "nvme_io": false, 00:20:36.219 "nvme_io_md": false, 00:20:36.219 "write_zeroes": true, 00:20:36.219 "zcopy": false, 00:20:36.219 
"get_zone_info": false, 00:20:36.219 "zone_management": false, 00:20:36.219 "zone_append": false, 00:20:36.219 "compare": false, 00:20:36.219 "compare_and_write": false, 00:20:36.219 "abort": false, 00:20:36.219 "seek_hole": false, 00:20:36.219 "seek_data": false, 00:20:36.219 "copy": false, 00:20:36.219 "nvme_iov_md": false 00:20:36.219 }, 00:20:36.219 "driver_specific": { 00:20:36.219 "raid": { 00:20:36.219 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:36.219 "strip_size_kb": 64, 00:20:36.219 "state": "online", 00:20:36.219 "raid_level": "raid5f", 00:20:36.219 "superblock": true, 00:20:36.219 "num_base_bdevs": 3, 00:20:36.219 "num_base_bdevs_discovered": 3, 00:20:36.219 "num_base_bdevs_operational": 3, 00:20:36.219 "base_bdevs_list": [ 00:20:36.219 { 00:20:36.219 "name": "pt1", 00:20:36.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.219 "is_configured": true, 00:20:36.219 "data_offset": 2048, 00:20:36.219 "data_size": 63488 00:20:36.219 }, 00:20:36.219 { 00:20:36.219 "name": "pt2", 00:20:36.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.219 "is_configured": true, 00:20:36.219 "data_offset": 2048, 00:20:36.219 "data_size": 63488 00:20:36.219 }, 00:20:36.219 { 00:20:36.219 "name": "pt3", 00:20:36.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.219 "is_configured": true, 00:20:36.219 "data_offset": 2048, 00:20:36.219 "data_size": 63488 00:20:36.219 } 00:20:36.219 ] 00:20:36.219 } 00:20:36.219 } 00:20:36.219 }' 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:36.219 pt2 00:20:36.219 pt3' 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.219 04:42:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.219 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.478 04:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.478 [2024-11-27 04:42:23.982587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f900cb16-1a59-46ae-ace8-5c39adef7634 '!=' f900cb16-1a59-46ae-ace8-5c39adef7634 ']' 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.478 [2024-11-27 04:42:24.030422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.478 
04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.478 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.478 "name": "raid_bdev1", 00:20:36.478 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:36.478 "strip_size_kb": 64, 00:20:36.478 "state": "online", 00:20:36.478 "raid_level": "raid5f", 00:20:36.478 "superblock": true, 00:20:36.478 "num_base_bdevs": 3, 00:20:36.478 "num_base_bdevs_discovered": 2, 00:20:36.478 "num_base_bdevs_operational": 2, 00:20:36.478 "base_bdevs_list": [ 00:20:36.478 { 00:20:36.478 "name": null, 00:20:36.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.478 "is_configured": false, 00:20:36.478 "data_offset": 0, 00:20:36.478 "data_size": 63488 00:20:36.479 }, 00:20:36.479 { 00:20:36.479 "name": "pt2", 00:20:36.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.479 "is_configured": true, 00:20:36.479 "data_offset": 2048, 00:20:36.479 "data_size": 63488 00:20:36.479 }, 00:20:36.479 { 00:20:36.479 "name": "pt3", 00:20:36.479 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.479 "is_configured": true, 00:20:36.479 "data_offset": 2048, 00:20:36.479 "data_size": 63488 00:20:36.479 } 00:20:36.479 ] 00:20:36.479 }' 00:20:36.479 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.479 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.057 [2024-11-27 04:42:24.554544] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.057 [2024-11-27 04:42:24.555698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.057 [2024-11-27 04:42:24.555837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.057 [2024-11-27 04:42:24.555920] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.057 [2024-11-27 04:42:24.555943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.057 [2024-11-27 04:42:24.638504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:37.057 [2024-11-27 04:42:24.638577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.057 [2024-11-27 04:42:24.638604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:37.057 [2024-11-27 04:42:24.638621] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:20:37.057 [2024-11-27 04:42:24.641486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.057 [2024-11-27 04:42:24.641537] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:37.057 [2024-11-27 04:42:24.641636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:37.057 [2024-11-27 04:42:24.641700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.057 pt2 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.057 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.316 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.316 "name": "raid_bdev1", 00:20:37.316 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:37.316 "strip_size_kb": 64, 00:20:37.316 "state": "configuring", 00:20:37.316 "raid_level": "raid5f", 00:20:37.316 "superblock": true, 00:20:37.316 "num_base_bdevs": 3, 00:20:37.316 "num_base_bdevs_discovered": 1, 00:20:37.316 "num_base_bdevs_operational": 2, 00:20:37.316 "base_bdevs_list": [ 00:20:37.316 { 00:20:37.316 "name": null, 00:20:37.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.316 "is_configured": false, 00:20:37.316 "data_offset": 2048, 00:20:37.316 "data_size": 63488 00:20:37.316 }, 00:20:37.316 { 00:20:37.316 "name": "pt2", 00:20:37.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:37.316 "is_configured": true, 00:20:37.316 "data_offset": 2048, 00:20:37.316 "data_size": 63488 00:20:37.316 }, 00:20:37.316 { 00:20:37.316 "name": null, 00:20:37.316 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:37.316 "is_configured": false, 00:20:37.316 "data_offset": 2048, 00:20:37.316 "data_size": 63488 00:20:37.316 } 00:20:37.316 ] 00:20:37.316 }' 00:20:37.316 04:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.316 04:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.574 [2024-11-27 04:42:25.150644] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:37.574 [2024-11-27 04:42:25.150882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.574 [2024-11-27 04:42:25.150925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:37.574 [2024-11-27 04:42:25.150944] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.574 [2024-11-27 04:42:25.151547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.574 [2024-11-27 04:42:25.151584] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:37.574 [2024-11-27 04:42:25.151694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:37.574 [2024-11-27 04:42:25.151742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:37.574 [2024-11-27 04:42:25.151905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:37.574 [2024-11-27 04:42:25.151927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:37.574 [2024-11-27 04:42:25.152238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:37.574 [2024-11-27 04:42:25.157134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:37.574 [2024-11-27 04:42:25.157160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:20:37.574 [2024-11-27 04:42:25.157583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.574 pt3 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.574 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.832 04:42:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.832 "name": "raid_bdev1", 00:20:37.832 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:37.832 "strip_size_kb": 64, 00:20:37.832 "state": "online", 00:20:37.832 "raid_level": "raid5f", 00:20:37.832 "superblock": true, 00:20:37.832 "num_base_bdevs": 3, 00:20:37.832 "num_base_bdevs_discovered": 2, 00:20:37.832 "num_base_bdevs_operational": 2, 00:20:37.832 "base_bdevs_list": [ 00:20:37.832 { 00:20:37.832 "name": null, 00:20:37.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.832 "is_configured": false, 00:20:37.832 "data_offset": 2048, 00:20:37.832 "data_size": 63488 00:20:37.832 }, 00:20:37.832 { 00:20:37.832 "name": "pt2", 00:20:37.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:37.832 "is_configured": true, 00:20:37.832 "data_offset": 2048, 00:20:37.832 "data_size": 63488 00:20:37.832 }, 00:20:37.832 { 00:20:37.832 "name": "pt3", 00:20:37.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:37.832 "is_configured": true, 00:20:37.832 "data_offset": 2048, 00:20:37.832 "data_size": 63488 00:20:37.832 } 00:20:37.832 ] 00:20:37.832 }' 00:20:37.832 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.832 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.090 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:38.090 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.090 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.090 [2024-11-27 04:42:25.675241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:38.090 [2024-11-27 04:42:25.675444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:38.090 [2024-11-27 04:42:25.675657] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:38.090 [2024-11-27 04:42:25.675757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:38.090 [2024-11-27 04:42:25.675809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:38.090 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.090 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.090 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:38.090 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.090 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.090 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.349 [2024-11-27 04:42:25.751352] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:38.349 [2024-11-27 04:42:25.751453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.349 [2024-11-27 04:42:25.751485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:38.349 [2024-11-27 04:42:25.751499] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.349 [2024-11-27 04:42:25.754481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.349 [2024-11-27 04:42:25.754529] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:38.349 [2024-11-27 04:42:25.754655] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:38.349 [2024-11-27 04:42:25.754735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:38.349 [2024-11-27 04:42:25.754942] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:38.349 [2024-11-27 04:42:25.754963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:38.349 [2024-11-27 04:42:25.754987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:38.349 [2024-11-27 04:42:25.755052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:38.349 pt1 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:20:38.349 04:42:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.349 "name": "raid_bdev1", 00:20:38.349 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:38.349 "strip_size_kb": 64, 00:20:38.349 "state": "configuring", 00:20:38.349 "raid_level": "raid5f", 00:20:38.349 
"superblock": true, 00:20:38.349 "num_base_bdevs": 3, 00:20:38.349 "num_base_bdevs_discovered": 1, 00:20:38.349 "num_base_bdevs_operational": 2, 00:20:38.349 "base_bdevs_list": [ 00:20:38.349 { 00:20:38.349 "name": null, 00:20:38.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.349 "is_configured": false, 00:20:38.349 "data_offset": 2048, 00:20:38.349 "data_size": 63488 00:20:38.349 }, 00:20:38.349 { 00:20:38.349 "name": "pt2", 00:20:38.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.349 "is_configured": true, 00:20:38.349 "data_offset": 2048, 00:20:38.349 "data_size": 63488 00:20:38.349 }, 00:20:38.349 { 00:20:38.349 "name": null, 00:20:38.349 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:38.349 "is_configured": false, 00:20:38.349 "data_offset": 2048, 00:20:38.349 "data_size": 63488 00:20:38.349 } 00:20:38.349 ] 00:20:38.349 }' 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.349 04:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.924 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:38.924 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:38.924 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.924 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.924 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.924 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:38.924 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:38.924 04:42:26 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.924 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.924 [2024-11-27 04:42:26.323465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:38.924 [2024-11-27 04:42:26.323671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.924 [2024-11-27 04:42:26.323716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:38.925 [2024-11-27 04:42:26.323733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.925 [2024-11-27 04:42:26.324385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.925 [2024-11-27 04:42:26.324418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:38.925 [2024-11-27 04:42:26.324546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:38.925 [2024-11-27 04:42:26.324577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:38.925 [2024-11-27 04:42:26.324733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:38.925 [2024-11-27 04:42:26.324755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:38.925 [2024-11-27 04:42:26.325090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:38.925 [2024-11-27 04:42:26.329974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:38.925 pt3 00:20:38.925 [2024-11-27 04:42:26.330151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:38.925 [2024-11-27 04:42:26.330467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.925 "name": "raid_bdev1", 00:20:38.925 "uuid": "f900cb16-1a59-46ae-ace8-5c39adef7634", 00:20:38.925 "strip_size_kb": 64, 00:20:38.925 "state": "online", 00:20:38.925 "raid_level": 
"raid5f", 00:20:38.925 "superblock": true, 00:20:38.925 "num_base_bdevs": 3, 00:20:38.925 "num_base_bdevs_discovered": 2, 00:20:38.925 "num_base_bdevs_operational": 2, 00:20:38.925 "base_bdevs_list": [ 00:20:38.925 { 00:20:38.925 "name": null, 00:20:38.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.925 "is_configured": false, 00:20:38.925 "data_offset": 2048, 00:20:38.925 "data_size": 63488 00:20:38.925 }, 00:20:38.925 { 00:20:38.925 "name": "pt2", 00:20:38.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.925 "is_configured": true, 00:20:38.925 "data_offset": 2048, 00:20:38.925 "data_size": 63488 00:20:38.925 }, 00:20:38.925 { 00:20:38.925 "name": "pt3", 00:20:38.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:38.925 "is_configured": true, 00:20:38.925 "data_offset": 2048, 00:20:38.925 "data_size": 63488 00:20:38.925 } 00:20:38.925 ] 00:20:38.925 }' 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.925 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.491 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.492 [2024-11-27 04:42:26.904411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f900cb16-1a59-46ae-ace8-5c39adef7634 '!=' f900cb16-1a59-46ae-ace8-5c39adef7634 ']' 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81643 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81643 ']' 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81643 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81643 00:20:39.492 killing process with pid 81643 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81643' 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81643 00:20:39.492 [2024-11-27 04:42:26.982732] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:39.492 04:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81643 
00:20:39.492 [2024-11-27 04:42:26.982865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:39.492 [2024-11-27 04:42:26.982949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:39.492 [2024-11-27 04:42:26.982969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:39.751 [2024-11-27 04:42:27.253851] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:41.127 04:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:41.127 00:20:41.127 real 0m8.624s 00:20:41.127 user 0m14.085s 00:20:41.127 sys 0m1.243s 00:20:41.127 04:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.127 04:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.127 ************************************ 00:20:41.127 END TEST raid5f_superblock_test 00:20:41.127 ************************************ 00:20:41.127 04:42:28 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:41.127 04:42:28 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:20:41.127 04:42:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:41.127 04:42:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.127 04:42:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:41.127 ************************************ 00:20:41.127 START TEST raid5f_rebuild_test 00:20:41.127 ************************************ 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:41.127 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:41.128 04:42:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82093 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82093 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82093 ']' 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.128 04:42:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.128 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:20:41.128 Zero copy mechanism will not be used. 00:20:41.128 [2024-11-27 04:42:28.490464] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:20:41.128 [2024-11-27 04:42:28.490621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82093 ] 00:20:41.128 [2024-11-27 04:42:28.672150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.443 [2024-11-27 04:42:28.835820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.716 [2024-11-27 04:42:29.073964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.716 [2024-11-27 04:42:29.074048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.974 BaseBdev1_malloc 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.974 04:42:29 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.974 [2024-11-27 04:42:29.540875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:41.974 [2024-11-27 04:42:29.541106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.974 [2024-11-27 04:42:29.541149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:41.974 [2024-11-27 04:42:29.541170] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.974 [2024-11-27 04:42:29.544036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.974 [2024-11-27 04:42:29.544087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:41.974 BaseBdev1 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.974 BaseBdev2_malloc 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.974 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.974 [2024-11-27 04:42:29.593694] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:20:41.974 [2024-11-27 04:42:29.593793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.974 [2024-11-27 04:42:29.593830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:41.974 [2024-11-27 04:42:29.593847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.233 [2024-11-27 04:42:29.596650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.233 [2024-11-27 04:42:29.596701] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:42.233 BaseBdev2 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.233 BaseBdev3_malloc 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.233 [2024-11-27 04:42:29.659845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:42.233 [2024-11-27 04:42:29.659936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.233 [2024-11-27 04:42:29.659972] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:20:42.233 [2024-11-27 04:42:29.659992] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.233 [2024-11-27 04:42:29.662931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.233 [2024-11-27 04:42:29.662983] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:42.233 BaseBdev3 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.233 spare_malloc 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.233 spare_delay 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.233 [2024-11-27 04:42:29.724675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:42.233 [2024-11-27 04:42:29.724764] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.233 [2024-11-27 04:42:29.724816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:42.233 [2024-11-27 04:42:29.724836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.233 [2024-11-27 04:42:29.728321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.233 [2024-11-27 04:42:29.728398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:42.233 spare 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.233 [2024-11-27 04:42:29.736881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:42.233 [2024-11-27 04:42:29.739579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:42.233 [2024-11-27 04:42:29.739822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:42.233 [2024-11-27 04:42:29.740089] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:42.233 [2024-11-27 04:42:29.740201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:42.233 [2024-11-27 04:42:29.740611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:42.233 [2024-11-27 04:42:29.745980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:42.233 [2024-11-27 04:42:29.746135] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:42.233 [2024-11-27 04:42:29.746481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.233 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.233 04:42:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.233 "name": "raid_bdev1", 00:20:42.233 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:42.233 "strip_size_kb": 64, 00:20:42.233 "state": "online", 00:20:42.233 "raid_level": "raid5f", 00:20:42.233 "superblock": false, 00:20:42.233 "num_base_bdevs": 3, 00:20:42.233 "num_base_bdevs_discovered": 3, 00:20:42.233 "num_base_bdevs_operational": 3, 00:20:42.233 "base_bdevs_list": [ 00:20:42.233 { 00:20:42.234 "name": "BaseBdev1", 00:20:42.234 "uuid": "c14c462d-4315-576a-a162-f7433119b1e2", 00:20:42.234 "is_configured": true, 00:20:42.234 "data_offset": 0, 00:20:42.234 "data_size": 65536 00:20:42.234 }, 00:20:42.234 { 00:20:42.234 "name": "BaseBdev2", 00:20:42.234 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:42.234 "is_configured": true, 00:20:42.234 "data_offset": 0, 00:20:42.234 "data_size": 65536 00:20:42.234 }, 00:20:42.234 { 00:20:42.234 "name": "BaseBdev3", 00:20:42.234 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:42.234 "is_configured": true, 00:20:42.234 "data_offset": 0, 00:20:42.234 "data_size": 65536 00:20:42.234 } 00:20:42.234 ] 00:20:42.234 }' 00:20:42.234 04:42:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.234 04:42:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.801 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:42.801 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.801 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:42.801 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.801 [2024-11-27 04:42:30.284640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.801 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:42.801 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:20:42.801 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.801 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:20:42.802 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:43.378 [2024-11-27 04:42:30.692568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:43.378 /dev/nbd0 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.378 1+0 records in 00:20:43.378 1+0 records out 00:20:43.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311054 s, 13.2 MB/s 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:20:43.378 04:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:20:43.946 512+0 records in 00:20:43.946 512+0 records out 00:20:43.946 67108864 bytes (67 MB, 64 MiB) copied, 0.503738 s, 133 MB/s 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:43.946 [2024-11-27 04:42:31.539468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.946 [2024-11-27 04:42:31.557305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.946 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.206 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.206 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.206 04:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.206 04:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.206 04:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.206 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.206 "name": "raid_bdev1", 00:20:44.206 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:44.206 "strip_size_kb": 64, 00:20:44.206 "state": "online", 00:20:44.206 "raid_level": "raid5f", 00:20:44.206 "superblock": false, 00:20:44.206 "num_base_bdevs": 3, 00:20:44.206 "num_base_bdevs_discovered": 2, 00:20:44.206 "num_base_bdevs_operational": 2, 00:20:44.206 "base_bdevs_list": [ 00:20:44.206 { 00:20:44.206 "name": null, 00:20:44.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.206 "is_configured": false, 00:20:44.206 "data_offset": 0, 00:20:44.206 "data_size": 65536 00:20:44.206 }, 00:20:44.206 { 00:20:44.206 "name": "BaseBdev2", 00:20:44.206 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:44.206 "is_configured": true, 00:20:44.206 "data_offset": 0, 00:20:44.206 "data_size": 65536 00:20:44.206 }, 00:20:44.206 { 00:20:44.206 "name": "BaseBdev3", 00:20:44.206 "uuid": 
"8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:44.206 "is_configured": true, 00:20:44.206 "data_offset": 0, 00:20:44.206 "data_size": 65536 00:20:44.206 } 00:20:44.206 ] 00:20:44.206 }' 00:20:44.206 04:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.206 04:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.465 04:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:44.465 04:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.465 04:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.465 [2024-11-27 04:42:32.081423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:44.725 [2024-11-27 04:42:32.096789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:20:44.725 04:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.725 04:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:44.725 [2024-11-27 04:42:32.104324] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.659 04:42:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.659 "name": "raid_bdev1", 00:20:45.659 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:45.659 "strip_size_kb": 64, 00:20:45.659 "state": "online", 00:20:45.659 "raid_level": "raid5f", 00:20:45.659 "superblock": false, 00:20:45.659 "num_base_bdevs": 3, 00:20:45.659 "num_base_bdevs_discovered": 3, 00:20:45.659 "num_base_bdevs_operational": 3, 00:20:45.659 "process": { 00:20:45.659 "type": "rebuild", 00:20:45.659 "target": "spare", 00:20:45.659 "progress": { 00:20:45.659 "blocks": 18432, 00:20:45.659 "percent": 14 00:20:45.659 } 00:20:45.659 }, 00:20:45.659 "base_bdevs_list": [ 00:20:45.659 { 00:20:45.659 "name": "spare", 00:20:45.659 "uuid": "a912b01c-f9ee-567c-9928-216a7d4427a5", 00:20:45.659 "is_configured": true, 00:20:45.659 "data_offset": 0, 00:20:45.659 "data_size": 65536 00:20:45.659 }, 00:20:45.659 { 00:20:45.659 "name": "BaseBdev2", 00:20:45.659 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:45.659 "is_configured": true, 00:20:45.659 "data_offset": 0, 00:20:45.659 "data_size": 65536 00:20:45.659 }, 00:20:45.659 { 00:20:45.659 "name": "BaseBdev3", 00:20:45.659 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:45.659 "is_configured": true, 00:20:45.659 "data_offset": 0, 00:20:45.659 "data_size": 65536 00:20:45.659 } 00:20:45.659 ] 00:20:45.659 }' 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.659 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.659 [2024-11-27 04:42:33.266806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.917 [2024-11-27 04:42:33.320348] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:45.917 [2024-11-27 04:42:33.320447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.917 [2024-11-27 04:42:33.320479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.917 [2024-11-27 04:42:33.320491] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.917 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.918 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.918 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.918 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.918 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.918 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.918 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.918 "name": "raid_bdev1", 00:20:45.918 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:45.918 "strip_size_kb": 64, 00:20:45.918 "state": "online", 00:20:45.918 "raid_level": "raid5f", 00:20:45.918 "superblock": false, 00:20:45.918 "num_base_bdevs": 3, 00:20:45.918 "num_base_bdevs_discovered": 2, 00:20:45.918 "num_base_bdevs_operational": 2, 00:20:45.918 "base_bdevs_list": [ 00:20:45.918 { 00:20:45.918 "name": null, 00:20:45.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.918 "is_configured": false, 00:20:45.918 "data_offset": 0, 00:20:45.918 "data_size": 65536 00:20:45.918 }, 00:20:45.918 { 00:20:45.918 "name": "BaseBdev2", 00:20:45.918 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:45.918 "is_configured": true, 00:20:45.918 "data_offset": 0, 00:20:45.918 "data_size": 65536 00:20:45.918 }, 00:20:45.918 { 00:20:45.918 "name": "BaseBdev3", 00:20:45.918 "uuid": 
"8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:45.918 "is_configured": true, 00:20:45.918 "data_offset": 0, 00:20:45.918 "data_size": 65536 00:20:45.918 } 00:20:45.918 ] 00:20:45.918 }' 00:20:45.918 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.918 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:46.485 "name": "raid_bdev1", 00:20:46.485 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:46.485 "strip_size_kb": 64, 00:20:46.485 "state": "online", 00:20:46.485 "raid_level": "raid5f", 00:20:46.485 "superblock": false, 00:20:46.485 "num_base_bdevs": 3, 00:20:46.485 "num_base_bdevs_discovered": 2, 00:20:46.485 "num_base_bdevs_operational": 2, 00:20:46.485 "base_bdevs_list": [ 00:20:46.485 { 00:20:46.485 
"name": null, 00:20:46.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.485 "is_configured": false, 00:20:46.485 "data_offset": 0, 00:20:46.485 "data_size": 65536 00:20:46.485 }, 00:20:46.485 { 00:20:46.485 "name": "BaseBdev2", 00:20:46.485 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:46.485 "is_configured": true, 00:20:46.485 "data_offset": 0, 00:20:46.485 "data_size": 65536 00:20:46.485 }, 00:20:46.485 { 00:20:46.485 "name": "BaseBdev3", 00:20:46.485 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:46.485 "is_configured": true, 00:20:46.485 "data_offset": 0, 00:20:46.485 "data_size": 65536 00:20:46.485 } 00:20:46.485 ] 00:20:46.485 }' 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:46.485 04:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.485 04:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:46.485 04:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:46.485 04:42:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.485 04:42:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.485 [2024-11-27 04:42:34.044420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:46.485 [2024-11-27 04:42:34.059205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:46.485 04:42:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.485 04:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:46.485 [2024-11-27 04:42:34.066658] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:20:47.858 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.858 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.858 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:47.858 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:47.858 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.858 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.858 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.858 04:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.858 04:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.859 "name": "raid_bdev1", 00:20:47.859 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:47.859 "strip_size_kb": 64, 00:20:47.859 "state": "online", 00:20:47.859 "raid_level": "raid5f", 00:20:47.859 "superblock": false, 00:20:47.859 "num_base_bdevs": 3, 00:20:47.859 "num_base_bdevs_discovered": 3, 00:20:47.859 "num_base_bdevs_operational": 3, 00:20:47.859 "process": { 00:20:47.859 "type": "rebuild", 00:20:47.859 "target": "spare", 00:20:47.859 "progress": { 00:20:47.859 "blocks": 18432, 00:20:47.859 "percent": 14 00:20:47.859 } 00:20:47.859 }, 00:20:47.859 "base_bdevs_list": [ 00:20:47.859 { 00:20:47.859 "name": "spare", 00:20:47.859 "uuid": "a912b01c-f9ee-567c-9928-216a7d4427a5", 00:20:47.859 "is_configured": true, 00:20:47.859 "data_offset": 0, 
00:20:47.859 "data_size": 65536 00:20:47.859 }, 00:20:47.859 { 00:20:47.859 "name": "BaseBdev2", 00:20:47.859 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:47.859 "is_configured": true, 00:20:47.859 "data_offset": 0, 00:20:47.859 "data_size": 65536 00:20:47.859 }, 00:20:47.859 { 00:20:47.859 "name": "BaseBdev3", 00:20:47.859 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:47.859 "is_configured": true, 00:20:47.859 "data_offset": 0, 00:20:47.859 "data_size": 65536 00:20:47.859 } 00:20:47.859 ] 00:20:47.859 }' 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=596 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:47.859 04:42:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.859 "name": "raid_bdev1", 00:20:47.859 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:47.859 "strip_size_kb": 64, 00:20:47.859 "state": "online", 00:20:47.859 "raid_level": "raid5f", 00:20:47.859 "superblock": false, 00:20:47.859 "num_base_bdevs": 3, 00:20:47.859 "num_base_bdevs_discovered": 3, 00:20:47.859 "num_base_bdevs_operational": 3, 00:20:47.859 "process": { 00:20:47.859 "type": "rebuild", 00:20:47.859 "target": "spare", 00:20:47.859 "progress": { 00:20:47.859 "blocks": 22528, 00:20:47.859 "percent": 17 00:20:47.859 } 00:20:47.859 }, 00:20:47.859 "base_bdevs_list": [ 00:20:47.859 { 00:20:47.859 "name": "spare", 00:20:47.859 "uuid": "a912b01c-f9ee-567c-9928-216a7d4427a5", 00:20:47.859 "is_configured": true, 00:20:47.859 "data_offset": 0, 00:20:47.859 "data_size": 65536 00:20:47.859 }, 00:20:47.859 { 00:20:47.859 "name": "BaseBdev2", 00:20:47.859 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:47.859 "is_configured": true, 00:20:47.859 "data_offset": 0, 00:20:47.859 "data_size": 65536 00:20:47.859 }, 00:20:47.859 { 00:20:47.859 "name": "BaseBdev3", 00:20:47.859 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:47.859 "is_configured": true, 00:20:47.859 "data_offset": 0, 00:20:47.859 "data_size": 65536 00:20:47.859 } 
00:20:47.859 ] 00:20:47.859 }' 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.859 04:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:48.792 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:48.792 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.792 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.792 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:48.792 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:48.792 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.050 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.051 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.051 04:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.051 04:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.051 04:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.051 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.051 "name": "raid_bdev1", 00:20:49.051 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:49.051 
"strip_size_kb": 64, 00:20:49.051 "state": "online", 00:20:49.051 "raid_level": "raid5f", 00:20:49.051 "superblock": false, 00:20:49.051 "num_base_bdevs": 3, 00:20:49.051 "num_base_bdevs_discovered": 3, 00:20:49.051 "num_base_bdevs_operational": 3, 00:20:49.051 "process": { 00:20:49.051 "type": "rebuild", 00:20:49.051 "target": "spare", 00:20:49.051 "progress": { 00:20:49.051 "blocks": 47104, 00:20:49.051 "percent": 35 00:20:49.051 } 00:20:49.051 }, 00:20:49.051 "base_bdevs_list": [ 00:20:49.051 { 00:20:49.051 "name": "spare", 00:20:49.051 "uuid": "a912b01c-f9ee-567c-9928-216a7d4427a5", 00:20:49.051 "is_configured": true, 00:20:49.051 "data_offset": 0, 00:20:49.051 "data_size": 65536 00:20:49.051 }, 00:20:49.051 { 00:20:49.051 "name": "BaseBdev2", 00:20:49.051 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:49.051 "is_configured": true, 00:20:49.051 "data_offset": 0, 00:20:49.051 "data_size": 65536 00:20:49.051 }, 00:20:49.051 { 00:20:49.051 "name": "BaseBdev3", 00:20:49.051 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:49.051 "is_configured": true, 00:20:49.051 "data_offset": 0, 00:20:49.051 "data_size": 65536 00:20:49.051 } 00:20:49.051 ] 00:20:49.051 }' 00:20:49.051 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.051 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.051 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.051 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.051 04:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:49.985 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:49.985 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.985 04:42:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.985 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:49.985 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:49.985 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.985 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.985 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.985 04:42:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.985 04:42:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.244 04:42:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.244 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.244 "name": "raid_bdev1", 00:20:50.244 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:50.244 "strip_size_kb": 64, 00:20:50.244 "state": "online", 00:20:50.244 "raid_level": "raid5f", 00:20:50.244 "superblock": false, 00:20:50.244 "num_base_bdevs": 3, 00:20:50.244 "num_base_bdevs_discovered": 3, 00:20:50.244 "num_base_bdevs_operational": 3, 00:20:50.244 "process": { 00:20:50.244 "type": "rebuild", 00:20:50.244 "target": "spare", 00:20:50.244 "progress": { 00:20:50.244 "blocks": 69632, 00:20:50.244 "percent": 53 00:20:50.244 } 00:20:50.244 }, 00:20:50.244 "base_bdevs_list": [ 00:20:50.244 { 00:20:50.244 "name": "spare", 00:20:50.244 "uuid": "a912b01c-f9ee-567c-9928-216a7d4427a5", 00:20:50.244 "is_configured": true, 00:20:50.244 "data_offset": 0, 00:20:50.244 "data_size": 65536 00:20:50.244 }, 00:20:50.244 { 00:20:50.244 "name": "BaseBdev2", 00:20:50.244 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:50.244 
"is_configured": true, 00:20:50.244 "data_offset": 0, 00:20:50.244 "data_size": 65536 00:20:50.244 }, 00:20:50.244 { 00:20:50.244 "name": "BaseBdev3", 00:20:50.244 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:50.244 "is_configured": true, 00:20:50.244 "data_offset": 0, 00:20:50.244 "data_size": 65536 00:20:50.244 } 00:20:50.244 ] 00:20:50.244 }' 00:20:50.244 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.244 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.244 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.244 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.244 04:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:20:51.182 04:42:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.442 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.442 "name": "raid_bdev1", 00:20:51.442 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:51.442 "strip_size_kb": 64, 00:20:51.442 "state": "online", 00:20:51.442 "raid_level": "raid5f", 00:20:51.442 "superblock": false, 00:20:51.442 "num_base_bdevs": 3, 00:20:51.442 "num_base_bdevs_discovered": 3, 00:20:51.442 "num_base_bdevs_operational": 3, 00:20:51.442 "process": { 00:20:51.442 "type": "rebuild", 00:20:51.442 "target": "spare", 00:20:51.442 "progress": { 00:20:51.442 "blocks": 94208, 00:20:51.442 "percent": 71 00:20:51.442 } 00:20:51.442 }, 00:20:51.442 "base_bdevs_list": [ 00:20:51.442 { 00:20:51.442 "name": "spare", 00:20:51.442 "uuid": "a912b01c-f9ee-567c-9928-216a7d4427a5", 00:20:51.442 "is_configured": true, 00:20:51.442 "data_offset": 0, 00:20:51.442 "data_size": 65536 00:20:51.442 }, 00:20:51.442 { 00:20:51.442 "name": "BaseBdev2", 00:20:51.442 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:51.442 "is_configured": true, 00:20:51.442 "data_offset": 0, 00:20:51.442 "data_size": 65536 00:20:51.442 }, 00:20:51.442 { 00:20:51.442 "name": "BaseBdev3", 00:20:51.442 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:51.442 "is_configured": true, 00:20:51.442 "data_offset": 0, 00:20:51.442 "data_size": 65536 00:20:51.442 } 00:20:51.442 ] 00:20:51.442 }' 00:20:51.442 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.442 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:51.442 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.442 04:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.442 04:42:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.378 "name": "raid_bdev1", 00:20:52.378 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:52.378 "strip_size_kb": 64, 00:20:52.378 "state": "online", 00:20:52.378 "raid_level": "raid5f", 00:20:52.378 "superblock": false, 00:20:52.378 "num_base_bdevs": 3, 00:20:52.378 "num_base_bdevs_discovered": 3, 00:20:52.378 "num_base_bdevs_operational": 3, 00:20:52.378 "process": { 00:20:52.378 "type": "rebuild", 00:20:52.378 "target": "spare", 00:20:52.378 "progress": { 00:20:52.378 "blocks": 116736, 00:20:52.378 "percent": 89 00:20:52.378 } 00:20:52.378 }, 00:20:52.378 "base_bdevs_list": [ 00:20:52.378 { 
00:20:52.378 "name": "spare", 00:20:52.378 "uuid": "a912b01c-f9ee-567c-9928-216a7d4427a5", 00:20:52.378 "is_configured": true, 00:20:52.378 "data_offset": 0, 00:20:52.378 "data_size": 65536 00:20:52.378 }, 00:20:52.378 { 00:20:52.378 "name": "BaseBdev2", 00:20:52.378 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:52.378 "is_configured": true, 00:20:52.378 "data_offset": 0, 00:20:52.378 "data_size": 65536 00:20:52.378 }, 00:20:52.378 { 00:20:52.378 "name": "BaseBdev3", 00:20:52.378 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:52.378 "is_configured": true, 00:20:52.378 "data_offset": 0, 00:20:52.378 "data_size": 65536 00:20:52.378 } 00:20:52.378 ] 00:20:52.378 }' 00:20:52.378 04:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.637 04:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.637 04:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.637 04:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.637 04:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:53.205 [2024-11-27 04:42:40.551621] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:53.205 [2024-11-27 04:42:40.551739] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:53.205 [2024-11-27 04:42:40.551820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.463 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:53.463 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.463 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.463 04:42:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.463 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.463 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.723 "name": "raid_bdev1", 00:20:53.723 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:53.723 "strip_size_kb": 64, 00:20:53.723 "state": "online", 00:20:53.723 "raid_level": "raid5f", 00:20:53.723 "superblock": false, 00:20:53.723 "num_base_bdevs": 3, 00:20:53.723 "num_base_bdevs_discovered": 3, 00:20:53.723 "num_base_bdevs_operational": 3, 00:20:53.723 "base_bdevs_list": [ 00:20:53.723 { 00:20:53.723 "name": "spare", 00:20:53.723 "uuid": "a912b01c-f9ee-567c-9928-216a7d4427a5", 00:20:53.723 "is_configured": true, 00:20:53.723 "data_offset": 0, 00:20:53.723 "data_size": 65536 00:20:53.723 }, 00:20:53.723 { 00:20:53.723 "name": "BaseBdev2", 00:20:53.723 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:53.723 "is_configured": true, 00:20:53.723 "data_offset": 0, 00:20:53.723 "data_size": 65536 00:20:53.723 }, 00:20:53.723 { 00:20:53.723 "name": "BaseBdev3", 00:20:53.723 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:53.723 "is_configured": true, 00:20:53.723 "data_offset": 0, 00:20:53.723 "data_size": 65536 00:20:53.723 } 
00:20:53.723 ] 00:20:53.723 }' 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.723 "name": "raid_bdev1", 00:20:53.723 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:53.723 "strip_size_kb": 64, 00:20:53.723 "state": "online", 00:20:53.723 "raid_level": "raid5f", 00:20:53.723 "superblock": false, 
00:20:53.723 "num_base_bdevs": 3, 00:20:53.723 "num_base_bdevs_discovered": 3, 00:20:53.723 "num_base_bdevs_operational": 3, 00:20:53.723 "base_bdevs_list": [ 00:20:53.723 { 00:20:53.723 "name": "spare", 00:20:53.723 "uuid": "a912b01c-f9ee-567c-9928-216a7d4427a5", 00:20:53.723 "is_configured": true, 00:20:53.723 "data_offset": 0, 00:20:53.723 "data_size": 65536 00:20:53.723 }, 00:20:53.723 { 00:20:53.723 "name": "BaseBdev2", 00:20:53.723 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:53.723 "is_configured": true, 00:20:53.723 "data_offset": 0, 00:20:53.723 "data_size": 65536 00:20:53.723 }, 00:20:53.723 { 00:20:53.723 "name": "BaseBdev3", 00:20:53.723 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 00:20:53.723 "is_configured": true, 00:20:53.723 "data_offset": 0, 00:20:53.723 "data_size": 65536 00:20:53.723 } 00:20:53.723 ] 00:20:53.723 }' 00:20:53.723 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.980 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:53.980 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.980 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:53.980 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:53.980 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.980 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.980 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:53.980 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:53.980 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:53.980 
04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.981 "name": "raid_bdev1", 00:20:53.981 "uuid": "a06cfe27-a78d-4d84-b70f-b16b55f35408", 00:20:53.981 "strip_size_kb": 64, 00:20:53.981 "state": "online", 00:20:53.981 "raid_level": "raid5f", 00:20:53.981 "superblock": false, 00:20:53.981 "num_base_bdevs": 3, 00:20:53.981 "num_base_bdevs_discovered": 3, 00:20:53.981 "num_base_bdevs_operational": 3, 00:20:53.981 "base_bdevs_list": [ 00:20:53.981 { 00:20:53.981 "name": "spare", 00:20:53.981 "uuid": "a912b01c-f9ee-567c-9928-216a7d4427a5", 00:20:53.981 "is_configured": true, 00:20:53.981 "data_offset": 0, 00:20:53.981 "data_size": 65536 00:20:53.981 }, 00:20:53.981 { 00:20:53.981 "name": "BaseBdev2", 00:20:53.981 "uuid": "0eb58642-eb14-5d36-8b58-e1e20cf6c45f", 00:20:53.981 "is_configured": true, 00:20:53.981 "data_offset": 0, 00:20:53.981 "data_size": 65536 00:20:53.981 }, 00:20:53.981 { 00:20:53.981 "name": "BaseBdev3", 00:20:53.981 "uuid": "8496ec83-d76b-5ce5-9d92-3a06f0124ee6", 
00:20:53.981 "is_configured": true, 00:20:53.981 "data_offset": 0, 00:20:53.981 "data_size": 65536 00:20:53.981 } 00:20:53.981 ] 00:20:53.981 }' 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.981 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.547 [2024-11-27 04:42:41.927338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:54.547 [2024-11-27 04:42:41.927375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:54.547 [2024-11-27 04:42:41.927484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:54.547 [2024-11-27 04:42:41.927591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:54.547 [2024-11-27 04:42:41.927618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:54.547 04:42:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:54.805 /dev/nbd0 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:54.805 1+0 records in 00:20:54.805 1+0 records out 00:20:54.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346695 s, 11.8 MB/s 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:54.805 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:55.370 /dev/nbd1 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:55.370 04:42:42 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:55.370 1+0 records in 00:20:55.370 1+0 records out 00:20:55.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429591 s, 9.5 MB/s 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:55.370 04:42:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:55.937 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:55.937 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:55.937 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:55.937 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.937 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.937 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:55.937 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:55.937 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.937 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:55.937 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82093 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82093 ']' 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82093 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82093 00:20:56.196 killing process with pid 82093 00:20:56.196 Received shutdown signal, test time was about 60.000000 seconds 00:20:56.196 00:20:56.196 Latency(us) 00:20:56.196 [2024-11-27T04:42:43.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.196 [2024-11-27T04:42:43.819Z] =================================================================================================================== 00:20:56.196 [2024-11-27T04:42:43.819Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82093' 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82093 00:20:56.196 04:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82093 00:20:56.196 [2024-11-27 04:42:43.706527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.762 [2024-11-27 04:42:44.082206] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:57.717 00:20:57.717 real 0m16.791s 00:20:57.717 user 0m21.623s 00:20:57.717 sys 0m2.119s 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.717 ************************************ 00:20:57.717 END TEST raid5f_rebuild_test 00:20:57.717 ************************************ 00:20:57.717 04:42:45 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:20:57.717 04:42:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:57.717 04:42:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.717 04:42:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:57.717 ************************************ 00:20:57.717 START TEST raid5f_rebuild_test_sb 00:20:57.717 ************************************ 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82543 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82543 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82543 ']' 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.717 04:42:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.992 [2024-11-27 04:42:45.337585] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:20:57.992 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:57.992 Zero copy mechanism will not be used. 00:20:57.992 [2024-11-27 04:42:45.338011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82543 ] 00:20:57.992 [2024-11-27 04:42:45.519275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.251 [2024-11-27 04:42:45.656013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.251 [2024-11-27 04:42:45.862888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.251 [2024-11-27 04:42:45.862974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.817 04:42:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.817 BaseBdev1_malloc 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.817 [2024-11-27 04:42:46.349195] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:58.817 [2024-11-27 04:42:46.349419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.817 [2024-11-27 04:42:46.349506] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:58.817 [2024-11-27 04:42:46.349630] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.817 [2024-11-27 04:42:46.352677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.817 [2024-11-27 04:42:46.352729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:58.817 BaseBdev1 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.817 BaseBdev2_malloc 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.817 [2024-11-27 04:42:46.402333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:58.817 [2024-11-27 04:42:46.402416] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.817 [2024-11-27 04:42:46.402460] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:58.817 [2024-11-27 04:42:46.402490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.817 [2024-11-27 04:42:46.405362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.817 [2024-11-27 04:42:46.405550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:58.817 BaseBdev2 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.817 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.076 BaseBdev3_malloc 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.076 [2024-11-27 04:42:46.463131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:59.076 [2024-11-27 04:42:46.463352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.076 [2024-11-27 04:42:46.463399] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:59.076 [2024-11-27 04:42:46.463421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.076 [2024-11-27 04:42:46.466329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.076 [2024-11-27 04:42:46.466380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:59.076 BaseBdev3 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.076 spare_malloc 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.076 spare_delay 00:20:59.076 
04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.076 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.076 [2024-11-27 04:42:46.527338] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:59.076 [2024-11-27 04:42:46.527420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.076 [2024-11-27 04:42:46.527453] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:59.076 [2024-11-27 04:42:46.527473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.077 [2024-11-27 04:42:46.530364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.077 [2024-11-27 04:42:46.530419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:59.077 spare 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.077 [2024-11-27 04:42:46.535448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.077 [2024-11-27 04:42:46.538795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:59.077 [2024-11-27 04:42:46.538956] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:59.077 [2024-11-27 04:42:46.539392] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:59.077 [2024-11-27 04:42:46.539421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:59.077 [2024-11-27 04:42:46.540014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:59.077 [2024-11-27 04:42:46.549076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:59.077 [2024-11-27 04:42:46.549141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:59.077 [2024-11-27 04:42:46.549591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.077 "name": "raid_bdev1", 00:20:59.077 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:20:59.077 "strip_size_kb": 64, 00:20:59.077 "state": "online", 00:20:59.077 "raid_level": "raid5f", 00:20:59.077 "superblock": true, 00:20:59.077 "num_base_bdevs": 3, 00:20:59.077 "num_base_bdevs_discovered": 3, 00:20:59.077 "num_base_bdevs_operational": 3, 00:20:59.077 "base_bdevs_list": [ 00:20:59.077 { 00:20:59.077 "name": "BaseBdev1", 00:20:59.077 "uuid": "660957bb-4052-5c0a-b308-24e26815d199", 00:20:59.077 "is_configured": true, 00:20:59.077 "data_offset": 2048, 00:20:59.077 "data_size": 63488 00:20:59.077 }, 00:20:59.077 { 00:20:59.077 "name": "BaseBdev2", 00:20:59.077 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:20:59.077 "is_configured": true, 00:20:59.077 "data_offset": 2048, 00:20:59.077 "data_size": 63488 00:20:59.077 }, 00:20:59.077 { 00:20:59.077 "name": "BaseBdev3", 00:20:59.077 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:20:59.077 "is_configured": true, 00:20:59.077 "data_offset": 2048, 00:20:59.077 "data_size": 63488 00:20:59.077 } 00:20:59.077 ] 00:20:59.077 }' 00:20:59.077 04:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.077 04:42:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.644 [2024-11-27 04:42:47.089353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:59.644 04:42:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:59.644 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:59.902 [2024-11-27 04:42:47.485291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:59.902 /dev/nbd0 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:00.161 1+0 records in 00:21:00.161 1+0 records out 00:21:00.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527793 s, 7.8 MB/s 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:21:00.161 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:21:00.419 496+0 records in 00:21:00.419 496+0 records out 00:21:00.419 65011712 bytes (65 MB, 62 MiB) copied, 0.433601 s, 150 MB/s 00:21:00.419 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:00.419 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:00.419 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:00.419 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:00.419 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:00.419 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.419 04:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:00.987 [2024-11-27 04:42:48.304490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.987 [2024-11-27 04:42:48.318458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.987 "name": "raid_bdev1", 00:21:00.987 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:00.987 "strip_size_kb": 64, 00:21:00.987 "state": "online", 00:21:00.987 "raid_level": "raid5f", 00:21:00.987 "superblock": true, 00:21:00.987 "num_base_bdevs": 3, 00:21:00.987 "num_base_bdevs_discovered": 2, 00:21:00.987 "num_base_bdevs_operational": 2, 00:21:00.987 "base_bdevs_list": [ 00:21:00.987 { 00:21:00.987 "name": null, 00:21:00.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.987 "is_configured": false, 00:21:00.987 "data_offset": 0, 00:21:00.987 "data_size": 63488 00:21:00.987 }, 00:21:00.987 { 00:21:00.987 "name": "BaseBdev2", 00:21:00.987 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:00.987 "is_configured": true, 00:21:00.987 "data_offset": 2048, 00:21:00.987 "data_size": 63488 00:21:00.987 }, 00:21:00.987 { 00:21:00.987 "name": "BaseBdev3", 00:21:00.987 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:00.987 "is_configured": true, 00:21:00.987 "data_offset": 2048, 00:21:00.987 "data_size": 63488 00:21:00.987 } 00:21:00.987 ] 00:21:00.987 }' 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.987 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.247 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:01.247 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.247 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.247 [2024-11-27 04:42:48.838645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.247 [2024-11-27 04:42:48.854234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:21:01.247 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.247 04:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:01.247 [2024-11-27 04:42:48.865099] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.624 "name": "raid_bdev1", 00:21:02.624 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:02.624 "strip_size_kb": 64, 00:21:02.624 "state": "online", 00:21:02.624 "raid_level": "raid5f", 00:21:02.624 "superblock": true, 00:21:02.624 "num_base_bdevs": 3, 00:21:02.624 "num_base_bdevs_discovered": 3, 00:21:02.624 "num_base_bdevs_operational": 3, 00:21:02.624 "process": { 00:21:02.624 "type": "rebuild", 00:21:02.624 "target": "spare", 00:21:02.624 "progress": { 
00:21:02.624 "blocks": 18432, 00:21:02.624 "percent": 14 00:21:02.624 } 00:21:02.624 }, 00:21:02.624 "base_bdevs_list": [ 00:21:02.624 { 00:21:02.624 "name": "spare", 00:21:02.624 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:02.624 "is_configured": true, 00:21:02.624 "data_offset": 2048, 00:21:02.624 "data_size": 63488 00:21:02.624 }, 00:21:02.624 { 00:21:02.624 "name": "BaseBdev2", 00:21:02.624 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:02.624 "is_configured": true, 00:21:02.624 "data_offset": 2048, 00:21:02.624 "data_size": 63488 00:21:02.624 }, 00:21:02.624 { 00:21:02.624 "name": "BaseBdev3", 00:21:02.624 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:02.624 "is_configured": true, 00:21:02.624 "data_offset": 2048, 00:21:02.624 "data_size": 63488 00:21:02.624 } 00:21:02.624 ] 00:21:02.624 }' 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.624 04:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.624 [2024-11-27 04:42:50.044045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:02.624 [2024-11-27 04:42:50.081172] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:02.624 [2024-11-27 04:42:50.081254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:21:02.624 [2024-11-27 04:42:50.081284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:02.624 [2024-11-27 04:42:50.081296] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.624 04:42:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.624 "name": "raid_bdev1", 00:21:02.624 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:02.624 "strip_size_kb": 64, 00:21:02.624 "state": "online", 00:21:02.624 "raid_level": "raid5f", 00:21:02.624 "superblock": true, 00:21:02.624 "num_base_bdevs": 3, 00:21:02.624 "num_base_bdevs_discovered": 2, 00:21:02.624 "num_base_bdevs_operational": 2, 00:21:02.624 "base_bdevs_list": [ 00:21:02.624 { 00:21:02.624 "name": null, 00:21:02.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.624 "is_configured": false, 00:21:02.624 "data_offset": 0, 00:21:02.624 "data_size": 63488 00:21:02.624 }, 00:21:02.624 { 00:21:02.624 "name": "BaseBdev2", 00:21:02.624 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:02.624 "is_configured": true, 00:21:02.624 "data_offset": 2048, 00:21:02.624 "data_size": 63488 00:21:02.624 }, 00:21:02.624 { 00:21:02.624 "name": "BaseBdev3", 00:21:02.624 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:02.624 "is_configured": true, 00:21:02.624 "data_offset": 2048, 00:21:02.624 "data_size": 63488 00:21:02.624 } 00:21:02.624 ] 00:21:02.624 }' 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.624 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.204 "name": "raid_bdev1", 00:21:03.204 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:03.204 "strip_size_kb": 64, 00:21:03.204 "state": "online", 00:21:03.204 "raid_level": "raid5f", 00:21:03.204 "superblock": true, 00:21:03.204 "num_base_bdevs": 3, 00:21:03.204 "num_base_bdevs_discovered": 2, 00:21:03.204 "num_base_bdevs_operational": 2, 00:21:03.204 "base_bdevs_list": [ 00:21:03.204 { 00:21:03.204 "name": null, 00:21:03.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.204 "is_configured": false, 00:21:03.204 "data_offset": 0, 00:21:03.204 "data_size": 63488 00:21:03.204 }, 00:21:03.204 { 00:21:03.204 "name": "BaseBdev2", 00:21:03.204 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:03.204 "is_configured": true, 00:21:03.204 "data_offset": 2048, 00:21:03.204 "data_size": 63488 00:21:03.204 }, 00:21:03.204 { 00:21:03.204 "name": "BaseBdev3", 00:21:03.204 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:03.204 "is_configured": true, 00:21:03.204 "data_offset": 2048, 00:21:03.204 "data_size": 63488 00:21:03.204 } 00:21:03.204 ] 00:21:03.204 }' 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.204 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.204 [2024-11-27 04:42:50.820592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.462 [2024-11-27 04:42:50.835976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:21:03.463 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.463 04:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:03.463 [2024-11-27 04:42:50.843398] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.399 "name": "raid_bdev1", 00:21:04.399 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:04.399 "strip_size_kb": 64, 00:21:04.399 "state": "online", 00:21:04.399 "raid_level": "raid5f", 00:21:04.399 "superblock": true, 00:21:04.399 "num_base_bdevs": 3, 00:21:04.399 "num_base_bdevs_discovered": 3, 00:21:04.399 "num_base_bdevs_operational": 3, 00:21:04.399 "process": { 00:21:04.399 "type": "rebuild", 00:21:04.399 "target": "spare", 00:21:04.399 "progress": { 00:21:04.399 "blocks": 18432, 00:21:04.399 "percent": 14 00:21:04.399 } 00:21:04.399 }, 00:21:04.399 "base_bdevs_list": [ 00:21:04.399 { 00:21:04.399 "name": "spare", 00:21:04.399 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:04.399 "is_configured": true, 00:21:04.399 "data_offset": 2048, 00:21:04.399 "data_size": 63488 00:21:04.399 }, 00:21:04.399 { 00:21:04.399 "name": "BaseBdev2", 00:21:04.399 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:04.399 "is_configured": true, 00:21:04.399 "data_offset": 2048, 00:21:04.399 "data_size": 63488 00:21:04.399 }, 00:21:04.399 { 00:21:04.399 "name": "BaseBdev3", 00:21:04.399 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:04.399 "is_configured": true, 00:21:04.399 "data_offset": 2048, 00:21:04.399 "data_size": 63488 00:21:04.399 } 00:21:04.399 ] 00:21:04.399 }' 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.399 04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.399 
04:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:04.399 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=613 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.399 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:04.659 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.659 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.659 "name": "raid_bdev1", 00:21:04.659 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:04.659 "strip_size_kb": 64, 00:21:04.659 "state": "online", 00:21:04.659 "raid_level": "raid5f", 00:21:04.659 "superblock": true, 00:21:04.659 "num_base_bdevs": 3, 00:21:04.659 "num_base_bdevs_discovered": 3, 00:21:04.659 "num_base_bdevs_operational": 3, 00:21:04.659 "process": { 00:21:04.659 "type": "rebuild", 00:21:04.659 "target": "spare", 00:21:04.659 "progress": { 00:21:04.659 "blocks": 22528, 00:21:04.659 "percent": 17 00:21:04.659 } 00:21:04.659 }, 00:21:04.659 "base_bdevs_list": [ 00:21:04.659 { 00:21:04.659 "name": "spare", 00:21:04.659 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:04.659 "is_configured": true, 00:21:04.659 "data_offset": 2048, 00:21:04.659 "data_size": 63488 00:21:04.659 }, 00:21:04.659 { 00:21:04.659 "name": "BaseBdev2", 00:21:04.659 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:04.659 "is_configured": true, 00:21:04.659 "data_offset": 2048, 00:21:04.659 "data_size": 63488 00:21:04.659 }, 00:21:04.659 { 00:21:04.659 "name": "BaseBdev3", 00:21:04.659 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:04.659 "is_configured": true, 00:21:04.659 "data_offset": 2048, 00:21:04.659 "data_size": 63488 00:21:04.659 } 00:21:04.659 ] 00:21:04.659 }' 00:21:04.659 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.659 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.659 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.659 04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.659 
04:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.594 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.852 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:05.852 "name": "raid_bdev1", 00:21:05.852 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:05.852 "strip_size_kb": 64, 00:21:05.852 "state": "online", 00:21:05.852 "raid_level": "raid5f", 00:21:05.852 "superblock": true, 00:21:05.852 "num_base_bdevs": 3, 00:21:05.852 "num_base_bdevs_discovered": 3, 00:21:05.852 "num_base_bdevs_operational": 3, 00:21:05.852 "process": { 00:21:05.852 "type": "rebuild", 00:21:05.852 "target": "spare", 00:21:05.852 "progress": { 00:21:05.852 "blocks": 47104, 00:21:05.852 "percent": 37 00:21:05.852 } 00:21:05.852 }, 00:21:05.852 
"base_bdevs_list": [ 00:21:05.852 { 00:21:05.852 "name": "spare", 00:21:05.852 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:05.852 "is_configured": true, 00:21:05.852 "data_offset": 2048, 00:21:05.852 "data_size": 63488 00:21:05.852 }, 00:21:05.852 { 00:21:05.852 "name": "BaseBdev2", 00:21:05.852 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:05.852 "is_configured": true, 00:21:05.852 "data_offset": 2048, 00:21:05.852 "data_size": 63488 00:21:05.852 }, 00:21:05.852 { 00:21:05.852 "name": "BaseBdev3", 00:21:05.852 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:05.852 "is_configured": true, 00:21:05.852 "data_offset": 2048, 00:21:05.852 "data_size": 63488 00:21:05.852 } 00:21:05.852 ] 00:21:05.852 }' 00:21:05.852 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:05.852 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:05.852 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:05.852 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:05.852 04:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:06.787 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:06.787 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.787 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.787 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:06.787 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:06.787 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.787 04:42:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.787 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.787 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.787 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.787 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.045 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.045 "name": "raid_bdev1", 00:21:07.045 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:07.045 "strip_size_kb": 64, 00:21:07.045 "state": "online", 00:21:07.045 "raid_level": "raid5f", 00:21:07.045 "superblock": true, 00:21:07.045 "num_base_bdevs": 3, 00:21:07.045 "num_base_bdevs_discovered": 3, 00:21:07.045 "num_base_bdevs_operational": 3, 00:21:07.045 "process": { 00:21:07.045 "type": "rebuild", 00:21:07.045 "target": "spare", 00:21:07.045 "progress": { 00:21:07.045 "blocks": 69632, 00:21:07.045 "percent": 54 00:21:07.045 } 00:21:07.045 }, 00:21:07.045 "base_bdevs_list": [ 00:21:07.045 { 00:21:07.045 "name": "spare", 00:21:07.045 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:07.045 "is_configured": true, 00:21:07.045 "data_offset": 2048, 00:21:07.045 "data_size": 63488 00:21:07.045 }, 00:21:07.045 { 00:21:07.045 "name": "BaseBdev2", 00:21:07.045 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:07.045 "is_configured": true, 00:21:07.045 "data_offset": 2048, 00:21:07.045 "data_size": 63488 00:21:07.045 }, 00:21:07.045 { 00:21:07.045 "name": "BaseBdev3", 00:21:07.045 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:07.045 "is_configured": true, 00:21:07.045 "data_offset": 2048, 00:21:07.045 "data_size": 63488 00:21:07.045 } 00:21:07.045 ] 00:21:07.045 }' 00:21:07.045 04:42:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.045 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.045 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.045 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.045 04:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:07.979 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:07.979 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.980 "name": "raid_bdev1", 00:21:07.980 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:07.980 
"strip_size_kb": 64, 00:21:07.980 "state": "online", 00:21:07.980 "raid_level": "raid5f", 00:21:07.980 "superblock": true, 00:21:07.980 "num_base_bdevs": 3, 00:21:07.980 "num_base_bdevs_discovered": 3, 00:21:07.980 "num_base_bdevs_operational": 3, 00:21:07.980 "process": { 00:21:07.980 "type": "rebuild", 00:21:07.980 "target": "spare", 00:21:07.980 "progress": { 00:21:07.980 "blocks": 94208, 00:21:07.980 "percent": 74 00:21:07.980 } 00:21:07.980 }, 00:21:07.980 "base_bdevs_list": [ 00:21:07.980 { 00:21:07.980 "name": "spare", 00:21:07.980 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:07.980 "is_configured": true, 00:21:07.980 "data_offset": 2048, 00:21:07.980 "data_size": 63488 00:21:07.980 }, 00:21:07.980 { 00:21:07.980 "name": "BaseBdev2", 00:21:07.980 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:07.980 "is_configured": true, 00:21:07.980 "data_offset": 2048, 00:21:07.980 "data_size": 63488 00:21:07.980 }, 00:21:07.980 { 00:21:07.980 "name": "BaseBdev3", 00:21:07.980 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:07.980 "is_configured": true, 00:21:07.980 "data_offset": 2048, 00:21:07.980 "data_size": 63488 00:21:07.980 } 00:21:07.980 ] 00:21:07.980 }' 00:21:07.980 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.239 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:08.239 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:08.239 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:08.239 04:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.197 "name": "raid_bdev1", 00:21:09.197 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:09.197 "strip_size_kb": 64, 00:21:09.197 "state": "online", 00:21:09.197 "raid_level": "raid5f", 00:21:09.197 "superblock": true, 00:21:09.197 "num_base_bdevs": 3, 00:21:09.197 "num_base_bdevs_discovered": 3, 00:21:09.197 "num_base_bdevs_operational": 3, 00:21:09.197 "process": { 00:21:09.197 "type": "rebuild", 00:21:09.197 "target": "spare", 00:21:09.197 "progress": { 00:21:09.197 "blocks": 118784, 00:21:09.197 "percent": 93 00:21:09.197 } 00:21:09.197 }, 00:21:09.197 "base_bdevs_list": [ 00:21:09.197 { 00:21:09.197 "name": "spare", 00:21:09.197 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:09.197 "is_configured": true, 00:21:09.197 "data_offset": 2048, 00:21:09.197 "data_size": 63488 00:21:09.197 }, 00:21:09.197 { 00:21:09.197 "name": "BaseBdev2", 00:21:09.197 "uuid": 
"955bf989-33cc-5d63-b893-30f216c760d0", 00:21:09.197 "is_configured": true, 00:21:09.197 "data_offset": 2048, 00:21:09.197 "data_size": 63488 00:21:09.197 }, 00:21:09.197 { 00:21:09.197 "name": "BaseBdev3", 00:21:09.197 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:09.197 "is_configured": true, 00:21:09.197 "data_offset": 2048, 00:21:09.197 "data_size": 63488 00:21:09.197 } 00:21:09.197 ] 00:21:09.197 }' 00:21:09.197 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.457 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.457 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.457 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.457 04:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:09.716 [2024-11-27 04:42:57.120401] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:09.716 [2024-11-27 04:42:57.120517] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:09.716 [2024-11-27 04:42:57.120687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.296 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.555 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.555 "name": "raid_bdev1", 00:21:10.555 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:10.555 "strip_size_kb": 64, 00:21:10.555 "state": "online", 00:21:10.555 "raid_level": "raid5f", 00:21:10.555 "superblock": true, 00:21:10.555 "num_base_bdevs": 3, 00:21:10.555 "num_base_bdevs_discovered": 3, 00:21:10.555 "num_base_bdevs_operational": 3, 00:21:10.555 "base_bdevs_list": [ 00:21:10.555 { 00:21:10.555 "name": "spare", 00:21:10.555 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:10.555 "is_configured": true, 00:21:10.555 "data_offset": 2048, 00:21:10.555 "data_size": 63488 00:21:10.555 }, 00:21:10.555 { 00:21:10.555 "name": "BaseBdev2", 00:21:10.555 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:10.555 "is_configured": true, 00:21:10.555 "data_offset": 2048, 00:21:10.555 "data_size": 63488 00:21:10.555 }, 00:21:10.555 { 00:21:10.555 "name": "BaseBdev3", 00:21:10.555 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:10.555 "is_configured": true, 00:21:10.555 "data_offset": 2048, 00:21:10.555 "data_size": 63488 00:21:10.555 } 00:21:10.555 ] 00:21:10.555 }' 00:21:10.555 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.556 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:10.556 04:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.556 "name": "raid_bdev1", 00:21:10.556 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:10.556 "strip_size_kb": 64, 00:21:10.556 "state": "online", 00:21:10.556 "raid_level": "raid5f", 00:21:10.556 "superblock": true, 00:21:10.556 "num_base_bdevs": 3, 00:21:10.556 "num_base_bdevs_discovered": 3, 00:21:10.556 "num_base_bdevs_operational": 3, 00:21:10.556 "base_bdevs_list": [ 
00:21:10.556 { 00:21:10.556 "name": "spare", 00:21:10.556 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:10.556 "is_configured": true, 00:21:10.556 "data_offset": 2048, 00:21:10.556 "data_size": 63488 00:21:10.556 }, 00:21:10.556 { 00:21:10.556 "name": "BaseBdev2", 00:21:10.556 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:10.556 "is_configured": true, 00:21:10.556 "data_offset": 2048, 00:21:10.556 "data_size": 63488 00:21:10.556 }, 00:21:10.556 { 00:21:10.556 "name": "BaseBdev3", 00:21:10.556 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:10.556 "is_configured": true, 00:21:10.556 "data_offset": 2048, 00:21:10.556 "data_size": 63488 00:21:10.556 } 00:21:10.556 ] 00:21:10.556 }' 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.556 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.815 04:42:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.815 "name": "raid_bdev1", 00:21:10.815 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:10.815 "strip_size_kb": 64, 00:21:10.815 "state": "online", 00:21:10.815 "raid_level": "raid5f", 00:21:10.815 "superblock": true, 00:21:10.815 "num_base_bdevs": 3, 00:21:10.815 "num_base_bdevs_discovered": 3, 00:21:10.815 "num_base_bdevs_operational": 3, 00:21:10.815 "base_bdevs_list": [ 00:21:10.815 { 00:21:10.815 "name": "spare", 00:21:10.815 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:10.815 "is_configured": true, 00:21:10.815 "data_offset": 2048, 00:21:10.815 "data_size": 63488 00:21:10.815 }, 00:21:10.815 { 00:21:10.815 "name": "BaseBdev2", 00:21:10.815 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:10.815 "is_configured": true, 00:21:10.815 "data_offset": 2048, 00:21:10.815 "data_size": 63488 00:21:10.815 }, 00:21:10.815 { 00:21:10.815 "name": "BaseBdev3", 00:21:10.815 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:10.815 "is_configured": true, 00:21:10.815 "data_offset": 2048, 00:21:10.815 
"data_size": 63488 00:21:10.815 } 00:21:10.815 ] 00:21:10.815 }' 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.815 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.383 [2024-11-27 04:42:58.708788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:11.383 [2024-11-27 04:42:58.708951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:11.383 [2024-11-27 04:42:58.709172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.383 [2024-11-27 04:42:58.709290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.383 [2024-11-27 04:42:58.709316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:11.383 04:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:11.642 /dev/nbd0 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:11.642 04:42:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:11.642 1+0 records in 00:21:11.642 1+0 records out 00:21:11.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323968 s, 12.6 MB/s 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:11.642 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:11.900 /dev/nbd1 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:11.900 04:42:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:11.900 1+0 records in 00:21:11.900 1+0 records out 00:21:11.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432814 s, 9.5 MB/s 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:11.900 04:42:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:11.900 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:12.157 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:12.157 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:12.157 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:12.157 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:12.157 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:12.157 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:12.157 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:12.415 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:12.415 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:12.415 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:12.415 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:12.415 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:12.415 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:12.415 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:12.415 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:12.415 04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:12.415 
04:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:12.672 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:12.672 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:12.672 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:12.672 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:12.672 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:12.672 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:12.672 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.673 [2024-11-27 04:43:00.242463] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:12.673 
[2024-11-27 04:43:00.242544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.673 [2024-11-27 04:43:00.242579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:12.673 [2024-11-27 04:43:00.242607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.673 [2024-11-27 04:43:00.245481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.673 [2024-11-27 04:43:00.245531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:12.673 [2024-11-27 04:43:00.245644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:12.673 [2024-11-27 04:43:00.245713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:12.673 [2024-11-27 04:43:00.245901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:12.673 [2024-11-27 04:43:00.246053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:12.673 spare 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.673 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.931 [2024-11-27 04:43:00.346207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:12.931 [2024-11-27 04:43:00.346279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:12.931 [2024-11-27 04:43:00.346707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:21:12.931 [2024-11-27 04:43:00.351599] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:12.931 [2024-11-27 04:43:00.351630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:12.931 [2024-11-27 04:43:00.351915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.931 "name": "raid_bdev1", 00:21:12.931 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:12.931 "strip_size_kb": 64, 00:21:12.931 "state": "online", 00:21:12.931 "raid_level": "raid5f", 00:21:12.931 "superblock": true, 00:21:12.931 "num_base_bdevs": 3, 00:21:12.931 "num_base_bdevs_discovered": 3, 00:21:12.931 "num_base_bdevs_operational": 3, 00:21:12.931 "base_bdevs_list": [ 00:21:12.931 { 00:21:12.931 "name": "spare", 00:21:12.931 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:12.931 "is_configured": true, 00:21:12.931 "data_offset": 2048, 00:21:12.931 "data_size": 63488 00:21:12.931 }, 00:21:12.931 { 00:21:12.931 "name": "BaseBdev2", 00:21:12.931 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:12.931 "is_configured": true, 00:21:12.931 "data_offset": 2048, 00:21:12.931 "data_size": 63488 00:21:12.931 }, 00:21:12.931 { 00:21:12.931 "name": "BaseBdev3", 00:21:12.931 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:12.931 "is_configured": true, 00:21:12.931 "data_offset": 2048, 00:21:12.931 "data_size": 63488 00:21:12.931 } 00:21:12.931 ] 00:21:12.931 }' 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.931 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.498 "name": "raid_bdev1", 00:21:13.498 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:13.498 "strip_size_kb": 64, 00:21:13.498 "state": "online", 00:21:13.498 "raid_level": "raid5f", 00:21:13.498 "superblock": true, 00:21:13.498 "num_base_bdevs": 3, 00:21:13.498 "num_base_bdevs_discovered": 3, 00:21:13.498 "num_base_bdevs_operational": 3, 00:21:13.498 "base_bdevs_list": [ 00:21:13.498 { 00:21:13.498 "name": "spare", 00:21:13.498 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:13.498 "is_configured": true, 00:21:13.498 "data_offset": 2048, 00:21:13.498 "data_size": 63488 00:21:13.498 }, 00:21:13.498 { 00:21:13.498 "name": "BaseBdev2", 00:21:13.498 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:13.498 "is_configured": true, 00:21:13.498 "data_offset": 2048, 00:21:13.498 "data_size": 63488 00:21:13.498 }, 00:21:13.498 { 00:21:13.498 "name": "BaseBdev3", 00:21:13.498 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:13.498 "is_configured": true, 00:21:13.498 "data_offset": 2048, 00:21:13.498 "data_size": 63488 00:21:13.498 } 00:21:13.498 ] 00:21:13.498 }' 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:13.498 04:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.498 [2024-11-27 04:43:01.053639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.498 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.499 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.499 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.499 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.499 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.499 "name": "raid_bdev1", 00:21:13.499 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:13.499 "strip_size_kb": 64, 00:21:13.499 "state": "online", 00:21:13.499 "raid_level": "raid5f", 00:21:13.499 "superblock": true, 00:21:13.499 "num_base_bdevs": 3, 00:21:13.499 "num_base_bdevs_discovered": 2, 00:21:13.499 "num_base_bdevs_operational": 2, 00:21:13.499 "base_bdevs_list": [ 00:21:13.499 { 00:21:13.499 "name": null, 00:21:13.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.499 "is_configured": false, 00:21:13.499 "data_offset": 0, 00:21:13.499 "data_size": 63488 00:21:13.499 }, 00:21:13.499 { 00:21:13.499 "name": "BaseBdev2", 
00:21:13.499 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:13.499 "is_configured": true, 00:21:13.499 "data_offset": 2048, 00:21:13.499 "data_size": 63488 00:21:13.499 }, 00:21:13.499 { 00:21:13.499 "name": "BaseBdev3", 00:21:13.499 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:13.499 "is_configured": true, 00:21:13.499 "data_offset": 2048, 00:21:13.499 "data_size": 63488 00:21:13.499 } 00:21:13.499 ] 00:21:13.499 }' 00:21:13.499 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.499 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.066 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:14.066 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.066 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.066 [2024-11-27 04:43:01.549838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:14.066 [2024-11-27 04:43:01.550088] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:14.066 [2024-11-27 04:43:01.550127] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:14.066 [2024-11-27 04:43:01.550178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:14.066 [2024-11-27 04:43:01.564571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:21:14.066 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.066 04:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:14.066 [2024-11-27 04:43:01.571757] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:15.003 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.003 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.003 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.003 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.003 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.003 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.003 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.003 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.003 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.003 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.261 "name": "raid_bdev1", 00:21:15.261 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:15.261 "strip_size_kb": 64, 00:21:15.261 "state": "online", 00:21:15.261 
"raid_level": "raid5f", 00:21:15.261 "superblock": true, 00:21:15.261 "num_base_bdevs": 3, 00:21:15.261 "num_base_bdevs_discovered": 3, 00:21:15.261 "num_base_bdevs_operational": 3, 00:21:15.261 "process": { 00:21:15.261 "type": "rebuild", 00:21:15.261 "target": "spare", 00:21:15.261 "progress": { 00:21:15.261 "blocks": 18432, 00:21:15.261 "percent": 14 00:21:15.261 } 00:21:15.261 }, 00:21:15.261 "base_bdevs_list": [ 00:21:15.261 { 00:21:15.261 "name": "spare", 00:21:15.261 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:15.261 "is_configured": true, 00:21:15.261 "data_offset": 2048, 00:21:15.261 "data_size": 63488 00:21:15.261 }, 00:21:15.261 { 00:21:15.261 "name": "BaseBdev2", 00:21:15.261 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:15.261 "is_configured": true, 00:21:15.261 "data_offset": 2048, 00:21:15.261 "data_size": 63488 00:21:15.261 }, 00:21:15.261 { 00:21:15.261 "name": "BaseBdev3", 00:21:15.261 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:15.261 "is_configured": true, 00:21:15.261 "data_offset": 2048, 00:21:15.261 "data_size": 63488 00:21:15.261 } 00:21:15.261 ] 00:21:15.261 }' 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.261 [2024-11-27 04:43:02.741914] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:15.261 [2024-11-27 04:43:02.787500] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:15.261 [2024-11-27 04:43:02.787603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.261 [2024-11-27 04:43:02.787629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:15.261 [2024-11-27 04:43:02.787646] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.261 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.262 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.262 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.262 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.262 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.262 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.262 "name": "raid_bdev1", 00:21:15.262 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:15.262 "strip_size_kb": 64, 00:21:15.262 "state": "online", 00:21:15.262 "raid_level": "raid5f", 00:21:15.262 "superblock": true, 00:21:15.262 "num_base_bdevs": 3, 00:21:15.262 "num_base_bdevs_discovered": 2, 00:21:15.262 "num_base_bdevs_operational": 2, 00:21:15.262 "base_bdevs_list": [ 00:21:15.262 { 00:21:15.262 "name": null, 00:21:15.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.262 "is_configured": false, 00:21:15.262 "data_offset": 0, 00:21:15.262 "data_size": 63488 00:21:15.262 }, 00:21:15.262 { 00:21:15.262 "name": "BaseBdev2", 00:21:15.262 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:15.262 "is_configured": true, 00:21:15.262 "data_offset": 2048, 00:21:15.262 "data_size": 63488 00:21:15.262 }, 00:21:15.262 { 00:21:15.262 "name": "BaseBdev3", 00:21:15.262 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:15.262 "is_configured": true, 00:21:15.262 "data_offset": 2048, 00:21:15.262 "data_size": 63488 00:21:15.262 } 00:21:15.262 ] 00:21:15.262 }' 00:21:15.262 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.262 04:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.828 04:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:15.828 04:43:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.828 04:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.828 [2024-11-27 04:43:03.370526] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:15.828 [2024-11-27 04:43:03.370614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.828 [2024-11-27 04:43:03.370662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:15.828 [2024-11-27 04:43:03.370684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.828 [2024-11-27 04:43:03.371349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.828 [2024-11-27 04:43:03.371417] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:15.828 [2024-11-27 04:43:03.371579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:15.828 [2024-11-27 04:43:03.371610] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:15.828 [2024-11-27 04:43:03.371625] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:15.828 [2024-11-27 04:43:03.371659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:15.828 [2024-11-27 04:43:03.386502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:21:15.828 spare 00:21:15.828 04:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.828 04:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:15.828 [2024-11-27 04:43:03.393829] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.205 "name": "raid_bdev1", 00:21:17.205 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:17.205 "strip_size_kb": 64, 00:21:17.205 "state": 
"online", 00:21:17.205 "raid_level": "raid5f", 00:21:17.205 "superblock": true, 00:21:17.205 "num_base_bdevs": 3, 00:21:17.205 "num_base_bdevs_discovered": 3, 00:21:17.205 "num_base_bdevs_operational": 3, 00:21:17.205 "process": { 00:21:17.205 "type": "rebuild", 00:21:17.205 "target": "spare", 00:21:17.205 "progress": { 00:21:17.205 "blocks": 18432, 00:21:17.205 "percent": 14 00:21:17.205 } 00:21:17.205 }, 00:21:17.205 "base_bdevs_list": [ 00:21:17.205 { 00:21:17.205 "name": "spare", 00:21:17.205 "uuid": "c9442be7-8743-5d8f-bfb7-ca7bac3d5a7b", 00:21:17.205 "is_configured": true, 00:21:17.205 "data_offset": 2048, 00:21:17.205 "data_size": 63488 00:21:17.205 }, 00:21:17.205 { 00:21:17.205 "name": "BaseBdev2", 00:21:17.205 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:17.205 "is_configured": true, 00:21:17.205 "data_offset": 2048, 00:21:17.205 "data_size": 63488 00:21:17.205 }, 00:21:17.205 { 00:21:17.205 "name": "BaseBdev3", 00:21:17.205 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:17.205 "is_configured": true, 00:21:17.205 "data_offset": 2048, 00:21:17.205 "data_size": 63488 00:21:17.205 } 00:21:17.205 ] 00:21:17.205 }' 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.205 [2024-11-27 04:43:04.568043] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:17.205 [2024-11-27 04:43:04.609188] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:17.205 [2024-11-27 04:43:04.609291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.205 [2024-11-27 04:43:04.609321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:17.205 [2024-11-27 04:43:04.609333] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.205 "name": "raid_bdev1", 00:21:17.205 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:17.205 "strip_size_kb": 64, 00:21:17.205 "state": "online", 00:21:17.205 "raid_level": "raid5f", 00:21:17.205 "superblock": true, 00:21:17.205 "num_base_bdevs": 3, 00:21:17.205 "num_base_bdevs_discovered": 2, 00:21:17.205 "num_base_bdevs_operational": 2, 00:21:17.205 "base_bdevs_list": [ 00:21:17.205 { 00:21:17.205 "name": null, 00:21:17.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.205 "is_configured": false, 00:21:17.205 "data_offset": 0, 00:21:17.205 "data_size": 63488 00:21:17.205 }, 00:21:17.205 { 00:21:17.205 "name": "BaseBdev2", 00:21:17.205 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:17.205 "is_configured": true, 00:21:17.205 "data_offset": 2048, 00:21:17.205 "data_size": 63488 00:21:17.205 }, 00:21:17.205 { 00:21:17.205 "name": "BaseBdev3", 00:21:17.205 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:17.205 "is_configured": true, 00:21:17.205 "data_offset": 2048, 00:21:17.205 "data_size": 63488 00:21:17.205 } 00:21:17.205 ] 00:21:17.205 }' 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.205 04:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.772 "name": "raid_bdev1", 00:21:17.772 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:17.772 "strip_size_kb": 64, 00:21:17.772 "state": "online", 00:21:17.772 "raid_level": "raid5f", 00:21:17.772 "superblock": true, 00:21:17.772 "num_base_bdevs": 3, 00:21:17.772 "num_base_bdevs_discovered": 2, 00:21:17.772 "num_base_bdevs_operational": 2, 00:21:17.772 "base_bdevs_list": [ 00:21:17.772 { 00:21:17.772 "name": null, 00:21:17.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.772 "is_configured": false, 00:21:17.772 "data_offset": 0, 00:21:17.772 "data_size": 63488 00:21:17.772 }, 00:21:17.772 { 00:21:17.772 "name": "BaseBdev2", 00:21:17.772 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:17.772 "is_configured": true, 00:21:17.772 "data_offset": 2048, 00:21:17.772 "data_size": 63488 00:21:17.772 }, 00:21:17.772 { 00:21:17.772 "name": "BaseBdev3", 00:21:17.772 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:17.772 "is_configured": true, 
00:21:17.772 "data_offset": 2048, 00:21:17.772 "data_size": 63488 00:21:17.772 } 00:21:17.772 ] 00:21:17.772 }' 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.772 [2024-11-27 04:43:05.344504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:17.772 [2024-11-27 04:43:05.344571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.772 [2024-11-27 04:43:05.344607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:17.772 [2024-11-27 04:43:05.344622] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.772 [2024-11-27 04:43:05.345258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.772 [2024-11-27 
04:43:05.345301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:17.772 [2024-11-27 04:43:05.345409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:17.772 [2024-11-27 04:43:05.345435] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:17.772 [2024-11-27 04:43:05.345471] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:17.772 [2024-11-27 04:43:05.345486] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:17.772 BaseBdev1 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.772 04:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.153 04:43:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.153 "name": "raid_bdev1", 00:21:19.153 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:19.153 "strip_size_kb": 64, 00:21:19.153 "state": "online", 00:21:19.153 "raid_level": "raid5f", 00:21:19.153 "superblock": true, 00:21:19.153 "num_base_bdevs": 3, 00:21:19.153 "num_base_bdevs_discovered": 2, 00:21:19.153 "num_base_bdevs_operational": 2, 00:21:19.153 "base_bdevs_list": [ 00:21:19.153 { 00:21:19.153 "name": null, 00:21:19.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.153 "is_configured": false, 00:21:19.153 "data_offset": 0, 00:21:19.153 "data_size": 63488 00:21:19.153 }, 00:21:19.153 { 00:21:19.153 "name": "BaseBdev2", 00:21:19.153 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:19.153 "is_configured": true, 00:21:19.153 "data_offset": 2048, 00:21:19.153 "data_size": 63488 00:21:19.153 }, 00:21:19.153 { 00:21:19.153 "name": "BaseBdev3", 00:21:19.153 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:19.153 "is_configured": true, 00:21:19.153 "data_offset": 2048, 00:21:19.153 "data_size": 63488 00:21:19.153 } 00:21:19.153 ] 00:21:19.153 }' 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.153 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.413 "name": "raid_bdev1", 00:21:19.413 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:19.413 "strip_size_kb": 64, 00:21:19.413 "state": "online", 00:21:19.413 "raid_level": "raid5f", 00:21:19.413 "superblock": true, 00:21:19.413 "num_base_bdevs": 3, 00:21:19.413 "num_base_bdevs_discovered": 2, 00:21:19.413 "num_base_bdevs_operational": 2, 00:21:19.413 "base_bdevs_list": [ 00:21:19.413 { 00:21:19.413 "name": null, 00:21:19.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.413 "is_configured": false, 00:21:19.413 "data_offset": 0, 00:21:19.413 "data_size": 63488 00:21:19.413 }, 00:21:19.413 { 00:21:19.413 "name": "BaseBdev2", 00:21:19.413 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 
00:21:19.413 "is_configured": true, 00:21:19.413 "data_offset": 2048, 00:21:19.413 "data_size": 63488 00:21:19.413 }, 00:21:19.413 { 00:21:19.413 "name": "BaseBdev3", 00:21:19.413 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:19.413 "is_configured": true, 00:21:19.413 "data_offset": 2048, 00:21:19.413 "data_size": 63488 00:21:19.413 } 00:21:19.413 ] 00:21:19.413 }' 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:19.413 04:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.672 04:43:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.672 [2024-11-27 04:43:07.041094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:19.672 [2024-11-27 04:43:07.041314] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:19.672 [2024-11-27 04:43:07.041340] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:19.672 request: 00:21:19.672 { 00:21:19.672 "base_bdev": "BaseBdev1", 00:21:19.672 "raid_bdev": "raid_bdev1", 00:21:19.672 "method": "bdev_raid_add_base_bdev", 00:21:19.672 "req_id": 1 00:21:19.672 } 00:21:19.672 Got JSON-RPC error response 00:21:19.672 response: 00:21:19.672 { 00:21:19.672 "code": -22, 00:21:19.672 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:19.672 } 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:19.672 04:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.607 "name": "raid_bdev1", 00:21:20.607 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:20.607 "strip_size_kb": 64, 00:21:20.607 "state": "online", 00:21:20.607 "raid_level": "raid5f", 00:21:20.607 "superblock": true, 00:21:20.607 "num_base_bdevs": 3, 00:21:20.607 "num_base_bdevs_discovered": 2, 00:21:20.607 "num_base_bdevs_operational": 2, 00:21:20.607 "base_bdevs_list": [ 00:21:20.607 { 00:21:20.607 "name": null, 00:21:20.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.607 "is_configured": false, 00:21:20.607 "data_offset": 0, 00:21:20.607 "data_size": 63488 00:21:20.607 }, 00:21:20.607 { 00:21:20.607 
"name": "BaseBdev2", 00:21:20.607 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:20.607 "is_configured": true, 00:21:20.607 "data_offset": 2048, 00:21:20.607 "data_size": 63488 00:21:20.607 }, 00:21:20.607 { 00:21:20.607 "name": "BaseBdev3", 00:21:20.607 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:20.607 "is_configured": true, 00:21:20.607 "data_offset": 2048, 00:21:20.607 "data_size": 63488 00:21:20.607 } 00:21:20.607 ] 00:21:20.607 }' 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.607 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.175 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.175 "name": "raid_bdev1", 00:21:21.175 "uuid": "42c3159d-81ed-4d97-9bd3-e0c6f979732e", 00:21:21.175 
"strip_size_kb": 64, 00:21:21.175 "state": "online", 00:21:21.175 "raid_level": "raid5f", 00:21:21.176 "superblock": true, 00:21:21.176 "num_base_bdevs": 3, 00:21:21.176 "num_base_bdevs_discovered": 2, 00:21:21.176 "num_base_bdevs_operational": 2, 00:21:21.176 "base_bdevs_list": [ 00:21:21.176 { 00:21:21.176 "name": null, 00:21:21.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.176 "is_configured": false, 00:21:21.176 "data_offset": 0, 00:21:21.176 "data_size": 63488 00:21:21.176 }, 00:21:21.176 { 00:21:21.176 "name": "BaseBdev2", 00:21:21.176 "uuid": "955bf989-33cc-5d63-b893-30f216c760d0", 00:21:21.176 "is_configured": true, 00:21:21.176 "data_offset": 2048, 00:21:21.176 "data_size": 63488 00:21:21.176 }, 00:21:21.176 { 00:21:21.176 "name": "BaseBdev3", 00:21:21.176 "uuid": "e28a92be-b8fe-52d9-8370-921c2c1051f5", 00:21:21.176 "is_configured": true, 00:21:21.176 "data_offset": 2048, 00:21:21.176 "data_size": 63488 00:21:21.176 } 00:21:21.176 ] 00:21:21.176 }' 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82543 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82543 ']' 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82543 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.176 04:43:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82543 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.176 killing process with pid 82543 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82543' 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82543 00:21:21.176 Received shutdown signal, test time was about 60.000000 seconds 00:21:21.176 00:21:21.176 Latency(us) 00:21:21.176 [2024-11-27T04:43:08.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.176 [2024-11-27T04:43:08.799Z] =================================================================================================================== 00:21:21.176 [2024-11-27T04:43:08.799Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.176 [2024-11-27 04:43:08.790064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:21.176 04:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82543 00:21:21.176 [2024-11-27 04:43:08.790227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.176 [2024-11-27 04:43:08.790316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:21.176 [2024-11-27 04:43:08.790348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:21.742 [2024-11-27 04:43:09.145017] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:22.678 04:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:22.678 00:21:22.678 real 0m24.978s 00:21:22.678 user 0m33.576s 
00:21:22.678 sys 0m2.419s 00:21:22.678 04:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.678 04:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.678 ************************************ 00:21:22.678 END TEST raid5f_rebuild_test_sb 00:21:22.678 ************************************ 00:21:22.678 04:43:10 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:21:22.678 04:43:10 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:21:22.678 04:43:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:22.678 04:43:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.678 04:43:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:22.678 ************************************ 00:21:22.678 START TEST raid5f_state_function_test 00:21:22.678 ************************************ 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83308 00:21:22.678 Process raid pid: 83308 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83308' 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83308 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83308 ']' 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.678 04:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.679 04:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.937 [2024-11-27 04:43:10.366944] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:21:22.937 [2024-11-27 04:43:10.367127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.937 [2024-11-27 04:43:10.556253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.195 [2024-11-27 04:43:10.713738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.453 [2024-11-27 04:43:10.965095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.453 [2024-11-27 04:43:10.965170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.713 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.713 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:23.713 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:23.713 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.713 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.972 [2024-11-27 04:43:11.337689] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:23.972 [2024-11-27 04:43:11.337759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:23.972 [2024-11-27 04:43:11.337799] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:23.972 [2024-11-27 04:43:11.337827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:23.972 [2024-11-27 04:43:11.337843] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:21:23.972 [2024-11-27 04:43:11.337865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:23.972 [2024-11-27 04:43:11.337880] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:23.972 [2024-11-27 04:43:11.337901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.972 04:43:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.972 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.972 "name": "Existed_Raid", 00:21:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.972 "strip_size_kb": 64, 00:21:23.972 "state": "configuring", 00:21:23.972 "raid_level": "raid5f", 00:21:23.972 "superblock": false, 00:21:23.972 "num_base_bdevs": 4, 00:21:23.972 "num_base_bdevs_discovered": 0, 00:21:23.972 "num_base_bdevs_operational": 4, 00:21:23.972 "base_bdevs_list": [ 00:21:23.972 { 00:21:23.972 "name": "BaseBdev1", 00:21:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.972 "is_configured": false, 00:21:23.972 "data_offset": 0, 00:21:23.972 "data_size": 0 00:21:23.972 }, 00:21:23.972 { 00:21:23.972 "name": "BaseBdev2", 00:21:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.972 "is_configured": false, 00:21:23.972 "data_offset": 0, 00:21:23.972 "data_size": 0 00:21:23.972 }, 00:21:23.972 { 00:21:23.972 "name": "BaseBdev3", 00:21:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.972 "is_configured": false, 00:21:23.973 "data_offset": 0, 00:21:23.973 "data_size": 0 00:21:23.973 }, 00:21:23.973 { 00:21:23.973 "name": "BaseBdev4", 00:21:23.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.973 "is_configured": false, 00:21:23.973 "data_offset": 0, 00:21:23.973 "data_size": 0 00:21:23.973 } 00:21:23.973 ] 00:21:23.973 }' 00:21:23.973 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.973 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.539 [2024-11-27 04:43:11.857761] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:24.539 [2024-11-27 04:43:11.857830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.539 [2024-11-27 04:43:11.865728] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:24.539 [2024-11-27 04:43:11.865805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:24.539 [2024-11-27 04:43:11.865829] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:24.539 [2024-11-27 04:43:11.865853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:24.539 [2024-11-27 04:43:11.865867] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:24.539 [2024-11-27 04:43:11.865888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:24.539 [2024-11-27 04:43:11.865903] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:21:24.539 [2024-11-27 04:43:11.865926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.539 [2024-11-27 04:43:11.910979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:24.539 BaseBdev1 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.539 
04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.539 [ 00:21:24.539 { 00:21:24.539 "name": "BaseBdev1", 00:21:24.539 "aliases": [ 00:21:24.539 "7720c965-8dd8-4b78-bc10-fdc32843bdd2" 00:21:24.539 ], 00:21:24.539 "product_name": "Malloc disk", 00:21:24.539 "block_size": 512, 00:21:24.539 "num_blocks": 65536, 00:21:24.539 "uuid": "7720c965-8dd8-4b78-bc10-fdc32843bdd2", 00:21:24.539 "assigned_rate_limits": { 00:21:24.539 "rw_ios_per_sec": 0, 00:21:24.539 "rw_mbytes_per_sec": 0, 00:21:24.539 "r_mbytes_per_sec": 0, 00:21:24.539 "w_mbytes_per_sec": 0 00:21:24.539 }, 00:21:24.539 "claimed": true, 00:21:24.539 "claim_type": "exclusive_write", 00:21:24.539 "zoned": false, 00:21:24.539 "supported_io_types": { 00:21:24.539 "read": true, 00:21:24.539 "write": true, 00:21:24.539 "unmap": true, 00:21:24.539 "flush": true, 00:21:24.539 "reset": true, 00:21:24.539 "nvme_admin": false, 00:21:24.539 "nvme_io": false, 00:21:24.539 "nvme_io_md": false, 00:21:24.539 "write_zeroes": true, 00:21:24.539 "zcopy": true, 00:21:24.539 "get_zone_info": false, 00:21:24.539 "zone_management": false, 00:21:24.539 "zone_append": false, 00:21:24.539 "compare": false, 00:21:24.539 "compare_and_write": false, 00:21:24.539 "abort": true, 00:21:24.539 "seek_hole": false, 00:21:24.539 "seek_data": false, 00:21:24.539 "copy": true, 00:21:24.539 "nvme_iov_md": false 00:21:24.539 }, 00:21:24.539 "memory_domains": [ 00:21:24.539 { 00:21:24.539 "dma_device_id": "system", 00:21:24.539 "dma_device_type": 1 00:21:24.539 }, 00:21:24.539 { 00:21:24.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.539 "dma_device_type": 2 00:21:24.539 } 00:21:24.539 ], 00:21:24.539 "driver_specific": {} 00:21:24.539 } 
00:21:24.539 ] 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.539 04:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:24.539 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.539 "name": "Existed_Raid", 00:21:24.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.539 "strip_size_kb": 64, 00:21:24.539 "state": "configuring", 00:21:24.540 "raid_level": "raid5f", 00:21:24.540 "superblock": false, 00:21:24.540 "num_base_bdevs": 4, 00:21:24.540 "num_base_bdevs_discovered": 1, 00:21:24.540 "num_base_bdevs_operational": 4, 00:21:24.540 "base_bdevs_list": [ 00:21:24.540 { 00:21:24.540 "name": "BaseBdev1", 00:21:24.540 "uuid": "7720c965-8dd8-4b78-bc10-fdc32843bdd2", 00:21:24.540 "is_configured": true, 00:21:24.540 "data_offset": 0, 00:21:24.540 "data_size": 65536 00:21:24.540 }, 00:21:24.540 { 00:21:24.540 "name": "BaseBdev2", 00:21:24.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.540 "is_configured": false, 00:21:24.540 "data_offset": 0, 00:21:24.540 "data_size": 0 00:21:24.540 }, 00:21:24.540 { 00:21:24.540 "name": "BaseBdev3", 00:21:24.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.540 "is_configured": false, 00:21:24.540 "data_offset": 0, 00:21:24.540 "data_size": 0 00:21:24.540 }, 00:21:24.540 { 00:21:24.540 "name": "BaseBdev4", 00:21:24.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.540 "is_configured": false, 00:21:24.540 "data_offset": 0, 00:21:24.540 "data_size": 0 00:21:24.540 } 00:21:24.540 ] 00:21:24.540 }' 00:21:24.540 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.540 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.107 
[2024-11-27 04:43:12.479173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:25.107 [2024-11-27 04:43:12.479388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.107 [2024-11-27 04:43:12.487224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.107 [2024-11-27 04:43:12.489610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.107 [2024-11-27 04:43:12.489672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.107 [2024-11-27 04:43:12.489697] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:25.107 [2024-11-27 04:43:12.489724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:25.107 [2024-11-27 04:43:12.489741] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:25.107 [2024-11-27 04:43:12.489763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.107 "name": "Existed_Raid", 00:21:25.107 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:25.107 "strip_size_kb": 64, 00:21:25.107 "state": "configuring", 00:21:25.107 "raid_level": "raid5f", 00:21:25.107 "superblock": false, 00:21:25.107 "num_base_bdevs": 4, 00:21:25.107 "num_base_bdevs_discovered": 1, 00:21:25.107 "num_base_bdevs_operational": 4, 00:21:25.107 "base_bdevs_list": [ 00:21:25.107 { 00:21:25.107 "name": "BaseBdev1", 00:21:25.107 "uuid": "7720c965-8dd8-4b78-bc10-fdc32843bdd2", 00:21:25.107 "is_configured": true, 00:21:25.107 "data_offset": 0, 00:21:25.107 "data_size": 65536 00:21:25.107 }, 00:21:25.107 { 00:21:25.107 "name": "BaseBdev2", 00:21:25.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.107 "is_configured": false, 00:21:25.107 "data_offset": 0, 00:21:25.107 "data_size": 0 00:21:25.107 }, 00:21:25.107 { 00:21:25.107 "name": "BaseBdev3", 00:21:25.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.107 "is_configured": false, 00:21:25.107 "data_offset": 0, 00:21:25.107 "data_size": 0 00:21:25.107 }, 00:21:25.107 { 00:21:25.107 "name": "BaseBdev4", 00:21:25.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.107 "is_configured": false, 00:21:25.107 "data_offset": 0, 00:21:25.107 "data_size": 0 00:21:25.107 } 00:21:25.107 ] 00:21:25.107 }' 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.107 04:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.674 [2024-11-27 04:43:13.046004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:25.674 BaseBdev2 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.674 [ 00:21:25.674 { 00:21:25.674 "name": "BaseBdev2", 00:21:25.674 "aliases": [ 00:21:25.674 "a8d3f398-9a96-4b58-b1b7-98b555c9f6de" 00:21:25.674 ], 00:21:25.674 "product_name": "Malloc disk", 00:21:25.674 "block_size": 512, 00:21:25.674 "num_blocks": 65536, 00:21:25.674 "uuid": "a8d3f398-9a96-4b58-b1b7-98b555c9f6de", 00:21:25.674 "assigned_rate_limits": { 00:21:25.674 "rw_ios_per_sec": 0, 00:21:25.674 "rw_mbytes_per_sec": 0, 00:21:25.674 
"r_mbytes_per_sec": 0, 00:21:25.674 "w_mbytes_per_sec": 0 00:21:25.674 }, 00:21:25.674 "claimed": true, 00:21:25.674 "claim_type": "exclusive_write", 00:21:25.674 "zoned": false, 00:21:25.674 "supported_io_types": { 00:21:25.674 "read": true, 00:21:25.674 "write": true, 00:21:25.674 "unmap": true, 00:21:25.674 "flush": true, 00:21:25.674 "reset": true, 00:21:25.674 "nvme_admin": false, 00:21:25.674 "nvme_io": false, 00:21:25.674 "nvme_io_md": false, 00:21:25.674 "write_zeroes": true, 00:21:25.674 "zcopy": true, 00:21:25.674 "get_zone_info": false, 00:21:25.674 "zone_management": false, 00:21:25.674 "zone_append": false, 00:21:25.674 "compare": false, 00:21:25.674 "compare_and_write": false, 00:21:25.674 "abort": true, 00:21:25.674 "seek_hole": false, 00:21:25.674 "seek_data": false, 00:21:25.674 "copy": true, 00:21:25.674 "nvme_iov_md": false 00:21:25.674 }, 00:21:25.674 "memory_domains": [ 00:21:25.674 { 00:21:25.674 "dma_device_id": "system", 00:21:25.674 "dma_device_type": 1 00:21:25.674 }, 00:21:25.674 { 00:21:25.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.674 "dma_device_type": 2 00:21:25.674 } 00:21:25.674 ], 00:21:25.674 "driver_specific": {} 00:21:25.674 } 00:21:25.674 ] 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.674 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.674 "name": "Existed_Raid", 00:21:25.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.674 "strip_size_kb": 64, 00:21:25.674 "state": "configuring", 00:21:25.674 "raid_level": "raid5f", 00:21:25.674 "superblock": false, 00:21:25.674 "num_base_bdevs": 4, 00:21:25.674 "num_base_bdevs_discovered": 2, 00:21:25.674 "num_base_bdevs_operational": 4, 00:21:25.674 "base_bdevs_list": [ 00:21:25.674 { 00:21:25.674 "name": "BaseBdev1", 00:21:25.674 "uuid": 
"7720c965-8dd8-4b78-bc10-fdc32843bdd2", 00:21:25.674 "is_configured": true, 00:21:25.674 "data_offset": 0, 00:21:25.674 "data_size": 65536 00:21:25.674 }, 00:21:25.674 { 00:21:25.674 "name": "BaseBdev2", 00:21:25.674 "uuid": "a8d3f398-9a96-4b58-b1b7-98b555c9f6de", 00:21:25.674 "is_configured": true, 00:21:25.674 "data_offset": 0, 00:21:25.674 "data_size": 65536 00:21:25.674 }, 00:21:25.674 { 00:21:25.674 "name": "BaseBdev3", 00:21:25.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.674 "is_configured": false, 00:21:25.674 "data_offset": 0, 00:21:25.674 "data_size": 0 00:21:25.674 }, 00:21:25.674 { 00:21:25.674 "name": "BaseBdev4", 00:21:25.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.675 "is_configured": false, 00:21:25.675 "data_offset": 0, 00:21:25.675 "data_size": 0 00:21:25.675 } 00:21:25.675 ] 00:21:25.675 }' 00:21:25.675 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.675 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.241 [2024-11-27 04:43:13.694201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.241 BaseBdev3 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.241 [ 00:21:26.241 { 00:21:26.241 "name": "BaseBdev3", 00:21:26.241 "aliases": [ 00:21:26.241 "0d055059-8a8f-4738-93ad-944b382fae35" 00:21:26.241 ], 00:21:26.241 "product_name": "Malloc disk", 00:21:26.241 "block_size": 512, 00:21:26.241 "num_blocks": 65536, 00:21:26.241 "uuid": "0d055059-8a8f-4738-93ad-944b382fae35", 00:21:26.241 "assigned_rate_limits": { 00:21:26.241 "rw_ios_per_sec": 0, 00:21:26.241 "rw_mbytes_per_sec": 0, 00:21:26.241 "r_mbytes_per_sec": 0, 00:21:26.241 "w_mbytes_per_sec": 0 00:21:26.241 }, 00:21:26.241 "claimed": true, 00:21:26.241 "claim_type": "exclusive_write", 00:21:26.241 "zoned": false, 00:21:26.241 "supported_io_types": { 00:21:26.241 "read": true, 00:21:26.241 "write": true, 00:21:26.241 "unmap": true, 00:21:26.241 "flush": true, 00:21:26.241 "reset": true, 00:21:26.241 "nvme_admin": false, 
00:21:26.241 "nvme_io": false, 00:21:26.241 "nvme_io_md": false, 00:21:26.241 "write_zeroes": true, 00:21:26.241 "zcopy": true, 00:21:26.241 "get_zone_info": false, 00:21:26.241 "zone_management": false, 00:21:26.241 "zone_append": false, 00:21:26.241 "compare": false, 00:21:26.241 "compare_and_write": false, 00:21:26.241 "abort": true, 00:21:26.241 "seek_hole": false, 00:21:26.241 "seek_data": false, 00:21:26.241 "copy": true, 00:21:26.241 "nvme_iov_md": false 00:21:26.241 }, 00:21:26.241 "memory_domains": [ 00:21:26.241 { 00:21:26.241 "dma_device_id": "system", 00:21:26.241 "dma_device_type": 1 00:21:26.241 }, 00:21:26.241 { 00:21:26.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.241 "dma_device_type": 2 00:21:26.241 } 00:21:26.241 ], 00:21:26.241 "driver_specific": {} 00:21:26.241 } 00:21:26.241 ] 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.241 "name": "Existed_Raid", 00:21:26.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.241 "strip_size_kb": 64, 00:21:26.241 "state": "configuring", 00:21:26.241 "raid_level": "raid5f", 00:21:26.241 "superblock": false, 00:21:26.241 "num_base_bdevs": 4, 00:21:26.241 "num_base_bdevs_discovered": 3, 00:21:26.241 "num_base_bdevs_operational": 4, 00:21:26.241 "base_bdevs_list": [ 00:21:26.241 { 00:21:26.241 "name": "BaseBdev1", 00:21:26.241 "uuid": "7720c965-8dd8-4b78-bc10-fdc32843bdd2", 00:21:26.241 "is_configured": true, 00:21:26.241 "data_offset": 0, 00:21:26.241 "data_size": 65536 00:21:26.241 }, 00:21:26.241 { 00:21:26.241 "name": "BaseBdev2", 00:21:26.241 "uuid": "a8d3f398-9a96-4b58-b1b7-98b555c9f6de", 00:21:26.241 "is_configured": true, 00:21:26.241 "data_offset": 0, 00:21:26.241 "data_size": 65536 00:21:26.241 }, 00:21:26.241 { 
00:21:26.241 "name": "BaseBdev3", 00:21:26.241 "uuid": "0d055059-8a8f-4738-93ad-944b382fae35", 00:21:26.241 "is_configured": true, 00:21:26.241 "data_offset": 0, 00:21:26.241 "data_size": 65536 00:21:26.241 }, 00:21:26.241 { 00:21:26.241 "name": "BaseBdev4", 00:21:26.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.241 "is_configured": false, 00:21:26.241 "data_offset": 0, 00:21:26.241 "data_size": 0 00:21:26.241 } 00:21:26.241 ] 00:21:26.241 }' 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.241 04:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.806 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:26.806 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.806 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.806 [2024-11-27 04:43:14.252946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:26.806 [2024-11-27 04:43:14.253240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:26.806 [2024-11-27 04:43:14.253296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:26.806 [2024-11-27 04:43:14.253760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:26.806 [2024-11-27 04:43:14.260805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:26.807 [2024-11-27 04:43:14.260952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:26.807 [2024-11-27 04:43:14.261474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.807 BaseBdev4 00:21:26.807 04:43:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.807 [ 00:21:26.807 { 00:21:26.807 "name": "BaseBdev4", 00:21:26.807 "aliases": [ 00:21:26.807 "f8803bef-fbb2-47f2-b0e3-29cf02b98777" 00:21:26.807 ], 00:21:26.807 "product_name": "Malloc disk", 00:21:26.807 "block_size": 512, 00:21:26.807 "num_blocks": 65536, 00:21:26.807 "uuid": "f8803bef-fbb2-47f2-b0e3-29cf02b98777", 00:21:26.807 "assigned_rate_limits": { 00:21:26.807 "rw_ios_per_sec": 0, 00:21:26.807 
"rw_mbytes_per_sec": 0, 00:21:26.807 "r_mbytes_per_sec": 0, 00:21:26.807 "w_mbytes_per_sec": 0 00:21:26.807 }, 00:21:26.807 "claimed": true, 00:21:26.807 "claim_type": "exclusive_write", 00:21:26.807 "zoned": false, 00:21:26.807 "supported_io_types": { 00:21:26.807 "read": true, 00:21:26.807 "write": true, 00:21:26.807 "unmap": true, 00:21:26.807 "flush": true, 00:21:26.807 "reset": true, 00:21:26.807 "nvme_admin": false, 00:21:26.807 "nvme_io": false, 00:21:26.807 "nvme_io_md": false, 00:21:26.807 "write_zeroes": true, 00:21:26.807 "zcopy": true, 00:21:26.807 "get_zone_info": false, 00:21:26.807 "zone_management": false, 00:21:26.807 "zone_append": false, 00:21:26.807 "compare": false, 00:21:26.807 "compare_and_write": false, 00:21:26.807 "abort": true, 00:21:26.807 "seek_hole": false, 00:21:26.807 "seek_data": false, 00:21:26.807 "copy": true, 00:21:26.807 "nvme_iov_md": false 00:21:26.807 }, 00:21:26.807 "memory_domains": [ 00:21:26.807 { 00:21:26.807 "dma_device_id": "system", 00:21:26.807 "dma_device_type": 1 00:21:26.807 }, 00:21:26.807 { 00:21:26.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.807 "dma_device_type": 2 00:21:26.807 } 00:21:26.807 ], 00:21:26.807 "driver_specific": {} 00:21:26.807 } 00:21:26.807 ] 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.807 04:43:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.807 "name": "Existed_Raid", 00:21:26.807 "uuid": "17e5f4a6-4ca4-4961-8f54-b707594d00d9", 00:21:26.807 "strip_size_kb": 64, 00:21:26.807 "state": "online", 00:21:26.807 "raid_level": "raid5f", 00:21:26.807 "superblock": false, 00:21:26.807 "num_base_bdevs": 4, 00:21:26.807 "num_base_bdevs_discovered": 4, 00:21:26.807 "num_base_bdevs_operational": 4, 00:21:26.807 "base_bdevs_list": [ 00:21:26.807 { 00:21:26.807 "name": 
"BaseBdev1", 00:21:26.807 "uuid": "7720c965-8dd8-4b78-bc10-fdc32843bdd2", 00:21:26.807 "is_configured": true, 00:21:26.807 "data_offset": 0, 00:21:26.807 "data_size": 65536 00:21:26.807 }, 00:21:26.807 { 00:21:26.807 "name": "BaseBdev2", 00:21:26.807 "uuid": "a8d3f398-9a96-4b58-b1b7-98b555c9f6de", 00:21:26.807 "is_configured": true, 00:21:26.807 "data_offset": 0, 00:21:26.807 "data_size": 65536 00:21:26.807 }, 00:21:26.807 { 00:21:26.807 "name": "BaseBdev3", 00:21:26.807 "uuid": "0d055059-8a8f-4738-93ad-944b382fae35", 00:21:26.807 "is_configured": true, 00:21:26.807 "data_offset": 0, 00:21:26.807 "data_size": 65536 00:21:26.807 }, 00:21:26.807 { 00:21:26.807 "name": "BaseBdev4", 00:21:26.807 "uuid": "f8803bef-fbb2-47f2-b0e3-29cf02b98777", 00:21:26.807 "is_configured": true, 00:21:26.807 "data_offset": 0, 00:21:26.807 "data_size": 65536 00:21:26.807 } 00:21:26.807 ] 00:21:26.807 }' 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.807 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:27.373 [2024-11-27 04:43:14.801278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:27.373 "name": "Existed_Raid", 00:21:27.373 "aliases": [ 00:21:27.373 "17e5f4a6-4ca4-4961-8f54-b707594d00d9" 00:21:27.373 ], 00:21:27.373 "product_name": "Raid Volume", 00:21:27.373 "block_size": 512, 00:21:27.373 "num_blocks": 196608, 00:21:27.373 "uuid": "17e5f4a6-4ca4-4961-8f54-b707594d00d9", 00:21:27.373 "assigned_rate_limits": { 00:21:27.373 "rw_ios_per_sec": 0, 00:21:27.373 "rw_mbytes_per_sec": 0, 00:21:27.373 "r_mbytes_per_sec": 0, 00:21:27.373 "w_mbytes_per_sec": 0 00:21:27.373 }, 00:21:27.373 "claimed": false, 00:21:27.373 "zoned": false, 00:21:27.373 "supported_io_types": { 00:21:27.373 "read": true, 00:21:27.373 "write": true, 00:21:27.373 "unmap": false, 00:21:27.373 "flush": false, 00:21:27.373 "reset": true, 00:21:27.373 "nvme_admin": false, 00:21:27.373 "nvme_io": false, 00:21:27.373 "nvme_io_md": false, 00:21:27.373 "write_zeroes": true, 00:21:27.373 "zcopy": false, 00:21:27.373 "get_zone_info": false, 00:21:27.373 "zone_management": false, 00:21:27.373 "zone_append": false, 00:21:27.373 "compare": false, 00:21:27.373 "compare_and_write": false, 00:21:27.373 "abort": false, 00:21:27.373 "seek_hole": false, 00:21:27.373 "seek_data": false, 00:21:27.373 "copy": false, 00:21:27.373 "nvme_iov_md": false 00:21:27.373 }, 00:21:27.373 "driver_specific": { 00:21:27.373 "raid": { 00:21:27.373 "uuid": "17e5f4a6-4ca4-4961-8f54-b707594d00d9", 00:21:27.373 "strip_size_kb": 64, 
00:21:27.373 "state": "online", 00:21:27.373 "raid_level": "raid5f", 00:21:27.373 "superblock": false, 00:21:27.373 "num_base_bdevs": 4, 00:21:27.373 "num_base_bdevs_discovered": 4, 00:21:27.373 "num_base_bdevs_operational": 4, 00:21:27.373 "base_bdevs_list": [ 00:21:27.373 { 00:21:27.373 "name": "BaseBdev1", 00:21:27.373 "uuid": "7720c965-8dd8-4b78-bc10-fdc32843bdd2", 00:21:27.373 "is_configured": true, 00:21:27.373 "data_offset": 0, 00:21:27.373 "data_size": 65536 00:21:27.373 }, 00:21:27.373 { 00:21:27.373 "name": "BaseBdev2", 00:21:27.373 "uuid": "a8d3f398-9a96-4b58-b1b7-98b555c9f6de", 00:21:27.373 "is_configured": true, 00:21:27.373 "data_offset": 0, 00:21:27.373 "data_size": 65536 00:21:27.373 }, 00:21:27.373 { 00:21:27.373 "name": "BaseBdev3", 00:21:27.373 "uuid": "0d055059-8a8f-4738-93ad-944b382fae35", 00:21:27.373 "is_configured": true, 00:21:27.373 "data_offset": 0, 00:21:27.373 "data_size": 65536 00:21:27.373 }, 00:21:27.373 { 00:21:27.373 "name": "BaseBdev4", 00:21:27.373 "uuid": "f8803bef-fbb2-47f2-b0e3-29cf02b98777", 00:21:27.373 "is_configured": true, 00:21:27.373 "data_offset": 0, 00:21:27.373 "data_size": 65536 00:21:27.373 } 00:21:27.373 ] 00:21:27.373 } 00:21:27.373 } 00:21:27.373 }' 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:27.373 BaseBdev2 00:21:27.373 BaseBdev3 00:21:27.373 BaseBdev4' 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:27.373 04:43:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.373 04:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.632 04:43:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:27.632 [2024-11-27 04:43:15.173644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.891 04:43:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.891 "name": "Existed_Raid", 00:21:27.891 "uuid": "17e5f4a6-4ca4-4961-8f54-b707594d00d9", 00:21:27.891 "strip_size_kb": 64, 00:21:27.891 "state": "online", 00:21:27.891 "raid_level": "raid5f", 00:21:27.891 "superblock": false, 00:21:27.891 "num_base_bdevs": 4, 00:21:27.891 "num_base_bdevs_discovered": 3, 00:21:27.891 "num_base_bdevs_operational": 3, 00:21:27.891 "base_bdevs_list": [ 00:21:27.891 { 00:21:27.891 "name": null, 00:21:27.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.891 "is_configured": false, 00:21:27.891 "data_offset": 0, 00:21:27.891 "data_size": 65536 00:21:27.891 }, 00:21:27.891 { 00:21:27.891 "name": "BaseBdev2", 00:21:27.891 "uuid": "a8d3f398-9a96-4b58-b1b7-98b555c9f6de", 00:21:27.891 "is_configured": true, 00:21:27.891 "data_offset": 0, 00:21:27.891 "data_size": 65536 00:21:27.891 }, 00:21:27.891 { 00:21:27.891 "name": "BaseBdev3", 00:21:27.891 "uuid": "0d055059-8a8f-4738-93ad-944b382fae35", 00:21:27.891 "is_configured": true, 00:21:27.891 "data_offset": 0, 00:21:27.891 "data_size": 65536 00:21:27.891 }, 00:21:27.891 { 00:21:27.891 "name": "BaseBdev4", 00:21:27.891 "uuid": "f8803bef-fbb2-47f2-b0e3-29cf02b98777", 00:21:27.891 "is_configured": true, 00:21:27.891 "data_offset": 0, 00:21:27.891 "data_size": 65536 00:21:27.891 } 00:21:27.891 ] 00:21:27.891 }' 00:21:27.891 
04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.891 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.469 [2024-11-27 04:43:15.836732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:28.469 [2024-11-27 04:43:15.836888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:28.469 [2024-11-27 04:43:15.924233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.469 04:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.469 [2024-11-27 04:43:15.976297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:28.469 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.469 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:28.469 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:28.469 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.469 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:21:28.469 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.469 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.469 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.727 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:28.727 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:28.727 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:28.727 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.727 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.727 [2024-11-27 04:43:16.125366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:28.728 [2024-11-27 04:43:16.125433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:28.728 04:43:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.728 BaseBdev2 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.728 [ 00:21:28.728 { 00:21:28.728 "name": "BaseBdev2", 00:21:28.728 "aliases": [ 00:21:28.728 "70820542-0c8e-4c52-bff1-e83f1bec1df4" 00:21:28.728 ], 00:21:28.728 "product_name": "Malloc disk", 00:21:28.728 "block_size": 512, 00:21:28.728 "num_blocks": 65536, 00:21:28.728 "uuid": "70820542-0c8e-4c52-bff1-e83f1bec1df4", 00:21:28.728 "assigned_rate_limits": { 00:21:28.728 "rw_ios_per_sec": 0, 00:21:28.728 "rw_mbytes_per_sec": 0, 00:21:28.728 "r_mbytes_per_sec": 0, 00:21:28.728 "w_mbytes_per_sec": 0 00:21:28.728 }, 00:21:28.728 "claimed": false, 00:21:28.728 "zoned": false, 00:21:28.728 "supported_io_types": { 00:21:28.728 "read": true, 00:21:28.728 "write": true, 00:21:28.728 "unmap": true, 00:21:28.728 "flush": true, 00:21:28.728 "reset": true, 00:21:28.728 "nvme_admin": false, 00:21:28.728 "nvme_io": false, 00:21:28.728 "nvme_io_md": false, 00:21:28.728 "write_zeroes": true, 00:21:28.728 "zcopy": true, 00:21:28.728 "get_zone_info": false, 00:21:28.728 "zone_management": false, 00:21:28.728 "zone_append": false, 00:21:28.728 "compare": false, 00:21:28.728 "compare_and_write": false, 00:21:28.728 "abort": true, 00:21:28.728 "seek_hole": false, 00:21:28.728 "seek_data": false, 00:21:28.728 "copy": true, 00:21:28.728 "nvme_iov_md": false 00:21:28.728 }, 00:21:28.728 "memory_domains": [ 00:21:28.728 { 00:21:28.728 "dma_device_id": "system", 00:21:28.728 "dma_device_type": 1 00:21:28.728 }, 
00:21:28.728 { 00:21:28.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.728 "dma_device_type": 2 00:21:28.728 } 00:21:28.728 ], 00:21:28.728 "driver_specific": {} 00:21:28.728 } 00:21:28.728 ] 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.728 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.987 BaseBdev3 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.988 [ 00:21:28.988 { 00:21:28.988 "name": "BaseBdev3", 00:21:28.988 "aliases": [ 00:21:28.988 "41e0a737-1b10-4e16-8a82-6d8c70a66bb8" 00:21:28.988 ], 00:21:28.988 "product_name": "Malloc disk", 00:21:28.988 "block_size": 512, 00:21:28.988 "num_blocks": 65536, 00:21:28.988 "uuid": "41e0a737-1b10-4e16-8a82-6d8c70a66bb8", 00:21:28.988 "assigned_rate_limits": { 00:21:28.988 "rw_ios_per_sec": 0, 00:21:28.988 "rw_mbytes_per_sec": 0, 00:21:28.988 "r_mbytes_per_sec": 0, 00:21:28.988 "w_mbytes_per_sec": 0 00:21:28.988 }, 00:21:28.988 "claimed": false, 00:21:28.988 "zoned": false, 00:21:28.988 "supported_io_types": { 00:21:28.988 "read": true, 00:21:28.988 "write": true, 00:21:28.988 "unmap": true, 00:21:28.988 "flush": true, 00:21:28.988 "reset": true, 00:21:28.988 "nvme_admin": false, 00:21:28.988 "nvme_io": false, 00:21:28.988 "nvme_io_md": false, 00:21:28.988 "write_zeroes": true, 00:21:28.988 "zcopy": true, 00:21:28.988 "get_zone_info": false, 00:21:28.988 "zone_management": false, 00:21:28.988 "zone_append": false, 00:21:28.988 "compare": false, 00:21:28.988 "compare_and_write": false, 00:21:28.988 "abort": true, 00:21:28.988 "seek_hole": false, 00:21:28.988 "seek_data": false, 00:21:28.988 "copy": true, 00:21:28.988 "nvme_iov_md": false 00:21:28.988 }, 00:21:28.988 "memory_domains": [ 00:21:28.988 { 00:21:28.988 "dma_device_id": "system", 00:21:28.988 
"dma_device_type": 1 00:21:28.988 }, 00:21:28.988 { 00:21:28.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.988 "dma_device_type": 2 00:21:28.988 } 00:21:28.988 ], 00:21:28.988 "driver_specific": {} 00:21:28.988 } 00:21:28.988 ] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.988 BaseBdev4 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:28.988 04:43:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.988 [ 00:21:28.988 { 00:21:28.988 "name": "BaseBdev4", 00:21:28.988 "aliases": [ 00:21:28.988 "12251cdf-9938-4fbb-837e-b528f5765499" 00:21:28.988 ], 00:21:28.988 "product_name": "Malloc disk", 00:21:28.988 "block_size": 512, 00:21:28.988 "num_blocks": 65536, 00:21:28.988 "uuid": "12251cdf-9938-4fbb-837e-b528f5765499", 00:21:28.988 "assigned_rate_limits": { 00:21:28.988 "rw_ios_per_sec": 0, 00:21:28.988 "rw_mbytes_per_sec": 0, 00:21:28.988 "r_mbytes_per_sec": 0, 00:21:28.988 "w_mbytes_per_sec": 0 00:21:28.988 }, 00:21:28.988 "claimed": false, 00:21:28.988 "zoned": false, 00:21:28.988 "supported_io_types": { 00:21:28.988 "read": true, 00:21:28.988 "write": true, 00:21:28.988 "unmap": true, 00:21:28.988 "flush": true, 00:21:28.988 "reset": true, 00:21:28.988 "nvme_admin": false, 00:21:28.988 "nvme_io": false, 00:21:28.988 "nvme_io_md": false, 00:21:28.988 "write_zeroes": true, 00:21:28.988 "zcopy": true, 00:21:28.988 "get_zone_info": false, 00:21:28.988 "zone_management": false, 00:21:28.988 "zone_append": false, 00:21:28.988 "compare": false, 00:21:28.988 "compare_and_write": false, 00:21:28.988 "abort": true, 00:21:28.988 "seek_hole": false, 00:21:28.988 "seek_data": false, 00:21:28.988 "copy": true, 00:21:28.988 "nvme_iov_md": false 00:21:28.988 }, 00:21:28.988 "memory_domains": [ 00:21:28.988 { 00:21:28.988 
"dma_device_id": "system", 00:21:28.988 "dma_device_type": 1 00:21:28.988 }, 00:21:28.988 { 00:21:28.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.988 "dma_device_type": 2 00:21:28.988 } 00:21:28.988 ], 00:21:28.988 "driver_specific": {} 00:21:28.988 } 00:21:28.988 ] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.988 [2024-11-27 04:43:16.508305] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:28.988 [2024-11-27 04:43:16.508366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:28.988 [2024-11-27 04:43:16.508401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.988 [2024-11-27 04:43:16.510962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:28.988 [2024-11-27 04:43:16.511035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.988 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.989 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.989 "name": "Existed_Raid", 00:21:28.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.989 "strip_size_kb": 64, 00:21:28.989 "state": "configuring", 00:21:28.989 "raid_level": "raid5f", 00:21:28.989 "superblock": false, 00:21:28.989 
"num_base_bdevs": 4, 00:21:28.989 "num_base_bdevs_discovered": 3, 00:21:28.989 "num_base_bdevs_operational": 4, 00:21:28.989 "base_bdevs_list": [ 00:21:28.989 { 00:21:28.989 "name": "BaseBdev1", 00:21:28.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.989 "is_configured": false, 00:21:28.989 "data_offset": 0, 00:21:28.989 "data_size": 0 00:21:28.989 }, 00:21:28.989 { 00:21:28.989 "name": "BaseBdev2", 00:21:28.989 "uuid": "70820542-0c8e-4c52-bff1-e83f1bec1df4", 00:21:28.989 "is_configured": true, 00:21:28.989 "data_offset": 0, 00:21:28.989 "data_size": 65536 00:21:28.989 }, 00:21:28.989 { 00:21:28.989 "name": "BaseBdev3", 00:21:28.989 "uuid": "41e0a737-1b10-4e16-8a82-6d8c70a66bb8", 00:21:28.989 "is_configured": true, 00:21:28.989 "data_offset": 0, 00:21:28.989 "data_size": 65536 00:21:28.989 }, 00:21:28.989 { 00:21:28.989 "name": "BaseBdev4", 00:21:28.989 "uuid": "12251cdf-9938-4fbb-837e-b528f5765499", 00:21:28.989 "is_configured": true, 00:21:28.989 "data_offset": 0, 00:21:28.989 "data_size": 65536 00:21:28.989 } 00:21:28.989 ] 00:21:28.989 }' 00:21:28.989 04:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.989 04:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.556 [2024-11-27 04:43:17.040467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.556 "name": "Existed_Raid", 00:21:29.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.556 "strip_size_kb": 64, 00:21:29.556 "state": "configuring", 00:21:29.556 "raid_level": "raid5f", 00:21:29.556 "superblock": false, 00:21:29.556 "num_base_bdevs": 4, 
00:21:29.556 "num_base_bdevs_discovered": 2, 00:21:29.556 "num_base_bdevs_operational": 4, 00:21:29.556 "base_bdevs_list": [ 00:21:29.556 { 00:21:29.556 "name": "BaseBdev1", 00:21:29.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.556 "is_configured": false, 00:21:29.556 "data_offset": 0, 00:21:29.556 "data_size": 0 00:21:29.556 }, 00:21:29.556 { 00:21:29.556 "name": null, 00:21:29.556 "uuid": "70820542-0c8e-4c52-bff1-e83f1bec1df4", 00:21:29.556 "is_configured": false, 00:21:29.556 "data_offset": 0, 00:21:29.556 "data_size": 65536 00:21:29.556 }, 00:21:29.556 { 00:21:29.556 "name": "BaseBdev3", 00:21:29.556 "uuid": "41e0a737-1b10-4e16-8a82-6d8c70a66bb8", 00:21:29.556 "is_configured": true, 00:21:29.556 "data_offset": 0, 00:21:29.556 "data_size": 65536 00:21:29.556 }, 00:21:29.556 { 00:21:29.556 "name": "BaseBdev4", 00:21:29.556 "uuid": "12251cdf-9938-4fbb-837e-b528f5765499", 00:21:29.556 "is_configured": true, 00:21:29.556 "data_offset": 0, 00:21:29.556 "data_size": 65536 00:21:29.556 } 00:21:29.556 ] 00:21:29.556 }' 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.556 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.123 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.123 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:30.123 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.123 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.123 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.123 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:30.123 04:43:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:30.123 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.123 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.123 [2024-11-27 04:43:17.638469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:30.123 BaseBdev1 00:21:30.123 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.124 04:43:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.124 [ 00:21:30.124 { 00:21:30.124 "name": "BaseBdev1", 00:21:30.124 "aliases": [ 00:21:30.124 "77b275e4-ef3a-496f-9c16-e5caebbedd8c" 00:21:30.124 ], 00:21:30.124 "product_name": "Malloc disk", 00:21:30.124 "block_size": 512, 00:21:30.124 "num_blocks": 65536, 00:21:30.124 "uuid": "77b275e4-ef3a-496f-9c16-e5caebbedd8c", 00:21:30.124 "assigned_rate_limits": { 00:21:30.124 "rw_ios_per_sec": 0, 00:21:30.124 "rw_mbytes_per_sec": 0, 00:21:30.124 "r_mbytes_per_sec": 0, 00:21:30.124 "w_mbytes_per_sec": 0 00:21:30.124 }, 00:21:30.124 "claimed": true, 00:21:30.124 "claim_type": "exclusive_write", 00:21:30.124 "zoned": false, 00:21:30.124 "supported_io_types": { 00:21:30.124 "read": true, 00:21:30.124 "write": true, 00:21:30.124 "unmap": true, 00:21:30.124 "flush": true, 00:21:30.124 "reset": true, 00:21:30.124 "nvme_admin": false, 00:21:30.124 "nvme_io": false, 00:21:30.124 "nvme_io_md": false, 00:21:30.124 "write_zeroes": true, 00:21:30.124 "zcopy": true, 00:21:30.124 "get_zone_info": false, 00:21:30.124 "zone_management": false, 00:21:30.124 "zone_append": false, 00:21:30.124 "compare": false, 00:21:30.124 "compare_and_write": false, 00:21:30.124 "abort": true, 00:21:30.124 "seek_hole": false, 00:21:30.124 "seek_data": false, 00:21:30.124 "copy": true, 00:21:30.124 "nvme_iov_md": false 00:21:30.124 }, 00:21:30.124 "memory_domains": [ 00:21:30.124 { 00:21:30.124 "dma_device_id": "system", 00:21:30.124 "dma_device_type": 1 00:21:30.124 }, 00:21:30.124 { 00:21:30.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.124 "dma_device_type": 2 00:21:30.124 } 00:21:30.124 ], 00:21:30.124 "driver_specific": {} 00:21:30.124 } 00:21:30.124 ] 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:30.124 04:43:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.124 "name": "Existed_Raid", 00:21:30.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.124 "strip_size_kb": 64, 00:21:30.124 "state": 
"configuring", 00:21:30.124 "raid_level": "raid5f", 00:21:30.124 "superblock": false, 00:21:30.124 "num_base_bdevs": 4, 00:21:30.124 "num_base_bdevs_discovered": 3, 00:21:30.124 "num_base_bdevs_operational": 4, 00:21:30.124 "base_bdevs_list": [ 00:21:30.124 { 00:21:30.124 "name": "BaseBdev1", 00:21:30.124 "uuid": "77b275e4-ef3a-496f-9c16-e5caebbedd8c", 00:21:30.124 "is_configured": true, 00:21:30.124 "data_offset": 0, 00:21:30.124 "data_size": 65536 00:21:30.124 }, 00:21:30.124 { 00:21:30.124 "name": null, 00:21:30.124 "uuid": "70820542-0c8e-4c52-bff1-e83f1bec1df4", 00:21:30.124 "is_configured": false, 00:21:30.124 "data_offset": 0, 00:21:30.124 "data_size": 65536 00:21:30.124 }, 00:21:30.124 { 00:21:30.124 "name": "BaseBdev3", 00:21:30.124 "uuid": "41e0a737-1b10-4e16-8a82-6d8c70a66bb8", 00:21:30.124 "is_configured": true, 00:21:30.124 "data_offset": 0, 00:21:30.124 "data_size": 65536 00:21:30.124 }, 00:21:30.124 { 00:21:30.124 "name": "BaseBdev4", 00:21:30.124 "uuid": "12251cdf-9938-4fbb-837e-b528f5765499", 00:21:30.124 "is_configured": true, 00:21:30.124 "data_offset": 0, 00:21:30.124 "data_size": 65536 00:21:30.124 } 00:21:30.124 ] 00:21:30.124 }' 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.124 04:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.691 04:43:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.691 [2024-11-27 04:43:18.226746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.691 04:43:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.691 "name": "Existed_Raid", 00:21:30.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.691 "strip_size_kb": 64, 00:21:30.691 "state": "configuring", 00:21:30.691 "raid_level": "raid5f", 00:21:30.691 "superblock": false, 00:21:30.691 "num_base_bdevs": 4, 00:21:30.691 "num_base_bdevs_discovered": 2, 00:21:30.691 "num_base_bdevs_operational": 4, 00:21:30.691 "base_bdevs_list": [ 00:21:30.691 { 00:21:30.691 "name": "BaseBdev1", 00:21:30.691 "uuid": "77b275e4-ef3a-496f-9c16-e5caebbedd8c", 00:21:30.691 "is_configured": true, 00:21:30.691 "data_offset": 0, 00:21:30.691 "data_size": 65536 00:21:30.691 }, 00:21:30.691 { 00:21:30.691 "name": null, 00:21:30.691 "uuid": "70820542-0c8e-4c52-bff1-e83f1bec1df4", 00:21:30.691 "is_configured": false, 00:21:30.691 "data_offset": 0, 00:21:30.691 "data_size": 65536 00:21:30.691 }, 00:21:30.691 { 00:21:30.691 "name": null, 00:21:30.691 "uuid": "41e0a737-1b10-4e16-8a82-6d8c70a66bb8", 00:21:30.691 "is_configured": false, 00:21:30.691 "data_offset": 0, 00:21:30.691 "data_size": 65536 00:21:30.691 }, 00:21:30.691 { 00:21:30.691 "name": "BaseBdev4", 00:21:30.691 "uuid": "12251cdf-9938-4fbb-837e-b528f5765499", 00:21:30.691 "is_configured": true, 00:21:30.691 "data_offset": 0, 00:21:30.691 "data_size": 65536 00:21:30.691 } 00:21:30.691 ] 00:21:30.691 }' 00:21:30.691 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.691 04:43:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.258 [2024-11-27 04:43:18.802894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.258 
04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.258 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.259 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.259 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.259 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.259 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.259 "name": "Existed_Raid", 00:21:31.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.259 "strip_size_kb": 64, 00:21:31.259 "state": "configuring", 00:21:31.259 "raid_level": "raid5f", 00:21:31.259 "superblock": false, 00:21:31.259 "num_base_bdevs": 4, 00:21:31.259 "num_base_bdevs_discovered": 3, 00:21:31.259 "num_base_bdevs_operational": 4, 00:21:31.259 "base_bdevs_list": [ 00:21:31.259 { 00:21:31.259 "name": "BaseBdev1", 00:21:31.259 "uuid": "77b275e4-ef3a-496f-9c16-e5caebbedd8c", 00:21:31.259 "is_configured": true, 00:21:31.259 "data_offset": 0, 00:21:31.259 "data_size": 65536 00:21:31.259 }, 00:21:31.259 { 00:21:31.259 "name": null, 00:21:31.259 "uuid": "70820542-0c8e-4c52-bff1-e83f1bec1df4", 00:21:31.259 "is_configured": 
false, 00:21:31.259 "data_offset": 0, 00:21:31.259 "data_size": 65536 00:21:31.259 }, 00:21:31.259 { 00:21:31.259 "name": "BaseBdev3", 00:21:31.259 "uuid": "41e0a737-1b10-4e16-8a82-6d8c70a66bb8", 00:21:31.259 "is_configured": true, 00:21:31.259 "data_offset": 0, 00:21:31.259 "data_size": 65536 00:21:31.259 }, 00:21:31.259 { 00:21:31.259 "name": "BaseBdev4", 00:21:31.259 "uuid": "12251cdf-9938-4fbb-837e-b528f5765499", 00:21:31.259 "is_configured": true, 00:21:31.259 "data_offset": 0, 00:21:31.259 "data_size": 65536 00:21:31.259 } 00:21:31.259 ] 00:21:31.259 }' 00:21:31.259 04:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.259 04:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.824 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.824 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:31.824 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.824 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.824 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.824 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:31.824 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:31.824 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.824 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.824 [2024-11-27 04:43:19.383056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.082 "name": "Existed_Raid", 00:21:32.082 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:32.082 "strip_size_kb": 64, 00:21:32.082 "state": "configuring", 00:21:32.082 "raid_level": "raid5f", 00:21:32.082 "superblock": false, 00:21:32.082 "num_base_bdevs": 4, 00:21:32.082 "num_base_bdevs_discovered": 2, 00:21:32.082 "num_base_bdevs_operational": 4, 00:21:32.082 "base_bdevs_list": [ 00:21:32.082 { 00:21:32.082 "name": null, 00:21:32.082 "uuid": "77b275e4-ef3a-496f-9c16-e5caebbedd8c", 00:21:32.082 "is_configured": false, 00:21:32.082 "data_offset": 0, 00:21:32.082 "data_size": 65536 00:21:32.082 }, 00:21:32.082 { 00:21:32.082 "name": null, 00:21:32.082 "uuid": "70820542-0c8e-4c52-bff1-e83f1bec1df4", 00:21:32.082 "is_configured": false, 00:21:32.082 "data_offset": 0, 00:21:32.082 "data_size": 65536 00:21:32.082 }, 00:21:32.082 { 00:21:32.082 "name": "BaseBdev3", 00:21:32.082 "uuid": "41e0a737-1b10-4e16-8a82-6d8c70a66bb8", 00:21:32.082 "is_configured": true, 00:21:32.082 "data_offset": 0, 00:21:32.082 "data_size": 65536 00:21:32.082 }, 00:21:32.082 { 00:21:32.082 "name": "BaseBdev4", 00:21:32.082 "uuid": "12251cdf-9938-4fbb-837e-b528f5765499", 00:21:32.082 "is_configured": true, 00:21:32.082 "data_offset": 0, 00:21:32.082 "data_size": 65536 00:21:32.082 } 00:21:32.082 ] 00:21:32.082 }' 00:21:32.082 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.083 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.649 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.649 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.649 04:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.649 04:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.649 [2024-11-27 04:43:20.037225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.649 "name": "Existed_Raid", 00:21:32.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.649 "strip_size_kb": 64, 00:21:32.649 "state": "configuring", 00:21:32.649 "raid_level": "raid5f", 00:21:32.649 "superblock": false, 00:21:32.649 "num_base_bdevs": 4, 00:21:32.649 "num_base_bdevs_discovered": 3, 00:21:32.649 "num_base_bdevs_operational": 4, 00:21:32.649 "base_bdevs_list": [ 00:21:32.649 { 00:21:32.649 "name": null, 00:21:32.649 "uuid": "77b275e4-ef3a-496f-9c16-e5caebbedd8c", 00:21:32.649 "is_configured": false, 00:21:32.649 "data_offset": 0, 00:21:32.649 "data_size": 65536 00:21:32.649 }, 00:21:32.649 { 00:21:32.649 "name": "BaseBdev2", 00:21:32.649 "uuid": "70820542-0c8e-4c52-bff1-e83f1bec1df4", 00:21:32.649 "is_configured": true, 00:21:32.649 "data_offset": 0, 00:21:32.649 "data_size": 65536 00:21:32.649 }, 00:21:32.649 { 00:21:32.649 "name": "BaseBdev3", 00:21:32.649 "uuid": "41e0a737-1b10-4e16-8a82-6d8c70a66bb8", 00:21:32.649 "is_configured": true, 00:21:32.649 "data_offset": 0, 00:21:32.649 "data_size": 65536 00:21:32.649 }, 00:21:32.649 { 00:21:32.649 "name": "BaseBdev4", 00:21:32.649 "uuid": "12251cdf-9938-4fbb-837e-b528f5765499", 00:21:32.649 "is_configured": true, 00:21:32.649 "data_offset": 0, 00:21:32.649 "data_size": 65536 00:21:32.649 } 00:21:32.649 ] 00:21:32.649 }' 00:21:32.649 04:43:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.649 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.907 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.907 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.907 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.907 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:32.907 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 77b275e4-ef3a-496f-9c16-e5caebbedd8c 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.165 [2024-11-27 04:43:20.627028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:33.165 [2024-11-27 
04:43:20.627097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:33.165 [2024-11-27 04:43:20.627110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:33.165 [2024-11-27 04:43:20.627435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:33.165 [2024-11-27 04:43:20.633824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:33.165 NewBaseBdev 00:21:33.165 [2024-11-27 04:43:20.633989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:33.165 [2024-11-27 04:43:20.634320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.165 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.165 [ 00:21:33.165 { 00:21:33.165 "name": "NewBaseBdev", 00:21:33.165 "aliases": [ 00:21:33.165 "77b275e4-ef3a-496f-9c16-e5caebbedd8c" 00:21:33.165 ], 00:21:33.165 "product_name": "Malloc disk", 00:21:33.165 "block_size": 512, 00:21:33.165 "num_blocks": 65536, 00:21:33.165 "uuid": "77b275e4-ef3a-496f-9c16-e5caebbedd8c", 00:21:33.165 "assigned_rate_limits": { 00:21:33.165 "rw_ios_per_sec": 0, 00:21:33.165 "rw_mbytes_per_sec": 0, 00:21:33.165 "r_mbytes_per_sec": 0, 00:21:33.165 "w_mbytes_per_sec": 0 00:21:33.165 }, 00:21:33.165 "claimed": true, 00:21:33.165 "claim_type": "exclusive_write", 00:21:33.165 "zoned": false, 00:21:33.165 "supported_io_types": { 00:21:33.165 "read": true, 00:21:33.165 "write": true, 00:21:33.165 "unmap": true, 00:21:33.165 "flush": true, 00:21:33.165 "reset": true, 00:21:33.165 "nvme_admin": false, 00:21:33.165 "nvme_io": false, 00:21:33.165 "nvme_io_md": false, 00:21:33.165 "write_zeroes": true, 00:21:33.165 "zcopy": true, 00:21:33.165 "get_zone_info": false, 00:21:33.165 "zone_management": false, 00:21:33.165 "zone_append": false, 00:21:33.165 "compare": false, 00:21:33.165 "compare_and_write": false, 00:21:33.166 "abort": true, 00:21:33.166 "seek_hole": false, 00:21:33.166 "seek_data": false, 00:21:33.166 "copy": true, 00:21:33.166 "nvme_iov_md": false 00:21:33.166 }, 00:21:33.166 "memory_domains": [ 00:21:33.166 { 00:21:33.166 "dma_device_id": "system", 00:21:33.166 "dma_device_type": 1 00:21:33.166 }, 00:21:33.166 { 00:21:33.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.166 "dma_device_type": 2 00:21:33.166 } 
00:21:33.166 ], 00:21:33.166 "driver_specific": {} 00:21:33.166 } 00:21:33.166 ] 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.166 "name": "Existed_Raid", 00:21:33.166 "uuid": "d40eda48-4d1e-4e29-9d2d-54c0d64a39ca", 00:21:33.166 "strip_size_kb": 64, 00:21:33.166 "state": "online", 00:21:33.166 "raid_level": "raid5f", 00:21:33.166 "superblock": false, 00:21:33.166 "num_base_bdevs": 4, 00:21:33.166 "num_base_bdevs_discovered": 4, 00:21:33.166 "num_base_bdevs_operational": 4, 00:21:33.166 "base_bdevs_list": [ 00:21:33.166 { 00:21:33.166 "name": "NewBaseBdev", 00:21:33.166 "uuid": "77b275e4-ef3a-496f-9c16-e5caebbedd8c", 00:21:33.166 "is_configured": true, 00:21:33.166 "data_offset": 0, 00:21:33.166 "data_size": 65536 00:21:33.166 }, 00:21:33.166 { 00:21:33.166 "name": "BaseBdev2", 00:21:33.166 "uuid": "70820542-0c8e-4c52-bff1-e83f1bec1df4", 00:21:33.166 "is_configured": true, 00:21:33.166 "data_offset": 0, 00:21:33.166 "data_size": 65536 00:21:33.166 }, 00:21:33.166 { 00:21:33.166 "name": "BaseBdev3", 00:21:33.166 "uuid": "41e0a737-1b10-4e16-8a82-6d8c70a66bb8", 00:21:33.166 "is_configured": true, 00:21:33.166 "data_offset": 0, 00:21:33.166 "data_size": 65536 00:21:33.166 }, 00:21:33.166 { 00:21:33.166 "name": "BaseBdev4", 00:21:33.166 "uuid": "12251cdf-9938-4fbb-837e-b528f5765499", 00:21:33.166 "is_configured": true, 00:21:33.166 "data_offset": 0, 00:21:33.166 "data_size": 65536 00:21:33.166 } 00:21:33.166 ] 00:21:33.166 }' 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.166 04:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:33.765 [2024-11-27 04:43:21.169982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.765 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:33.765 "name": "Existed_Raid", 00:21:33.765 "aliases": [ 00:21:33.765 "d40eda48-4d1e-4e29-9d2d-54c0d64a39ca" 00:21:33.765 ], 00:21:33.765 "product_name": "Raid Volume", 00:21:33.765 "block_size": 512, 00:21:33.765 "num_blocks": 196608, 00:21:33.765 "uuid": "d40eda48-4d1e-4e29-9d2d-54c0d64a39ca", 00:21:33.765 "assigned_rate_limits": { 00:21:33.765 "rw_ios_per_sec": 0, 00:21:33.765 "rw_mbytes_per_sec": 0, 00:21:33.765 "r_mbytes_per_sec": 0, 00:21:33.765 "w_mbytes_per_sec": 0 00:21:33.766 }, 00:21:33.766 "claimed": false, 00:21:33.766 "zoned": false, 00:21:33.766 "supported_io_types": { 00:21:33.766 "read": true, 00:21:33.766 "write": true, 00:21:33.766 "unmap": false, 00:21:33.766 "flush": false, 00:21:33.766 "reset": true, 00:21:33.766 "nvme_admin": false, 00:21:33.766 "nvme_io": false, 00:21:33.766 "nvme_io_md": 
false, 00:21:33.766 "write_zeroes": true, 00:21:33.766 "zcopy": false, 00:21:33.766 "get_zone_info": false, 00:21:33.766 "zone_management": false, 00:21:33.766 "zone_append": false, 00:21:33.766 "compare": false, 00:21:33.766 "compare_and_write": false, 00:21:33.766 "abort": false, 00:21:33.766 "seek_hole": false, 00:21:33.766 "seek_data": false, 00:21:33.766 "copy": false, 00:21:33.766 "nvme_iov_md": false 00:21:33.766 }, 00:21:33.766 "driver_specific": { 00:21:33.766 "raid": { 00:21:33.766 "uuid": "d40eda48-4d1e-4e29-9d2d-54c0d64a39ca", 00:21:33.766 "strip_size_kb": 64, 00:21:33.766 "state": "online", 00:21:33.766 "raid_level": "raid5f", 00:21:33.766 "superblock": false, 00:21:33.766 "num_base_bdevs": 4, 00:21:33.766 "num_base_bdevs_discovered": 4, 00:21:33.766 "num_base_bdevs_operational": 4, 00:21:33.766 "base_bdevs_list": [ 00:21:33.766 { 00:21:33.766 "name": "NewBaseBdev", 00:21:33.766 "uuid": "77b275e4-ef3a-496f-9c16-e5caebbedd8c", 00:21:33.766 "is_configured": true, 00:21:33.766 "data_offset": 0, 00:21:33.766 "data_size": 65536 00:21:33.766 }, 00:21:33.766 { 00:21:33.766 "name": "BaseBdev2", 00:21:33.766 "uuid": "70820542-0c8e-4c52-bff1-e83f1bec1df4", 00:21:33.766 "is_configured": true, 00:21:33.766 "data_offset": 0, 00:21:33.766 "data_size": 65536 00:21:33.766 }, 00:21:33.766 { 00:21:33.766 "name": "BaseBdev3", 00:21:33.766 "uuid": "41e0a737-1b10-4e16-8a82-6d8c70a66bb8", 00:21:33.766 "is_configured": true, 00:21:33.766 "data_offset": 0, 00:21:33.766 "data_size": 65536 00:21:33.766 }, 00:21:33.766 { 00:21:33.766 "name": "BaseBdev4", 00:21:33.766 "uuid": "12251cdf-9938-4fbb-837e-b528f5765499", 00:21:33.766 "is_configured": true, 00:21:33.766 "data_offset": 0, 00:21:33.766 "data_size": 65536 00:21:33.766 } 00:21:33.766 ] 00:21:33.766 } 00:21:33.766 } 00:21:33.766 }' 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:33.766 04:43:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:33.766 BaseBdev2 00:21:33.766 BaseBdev3 00:21:33.766 BaseBdev4' 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.766 04:43:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.023 04:43:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:34.023 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.024 [2024-11-27 04:43:21.541758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:34.024 [2024-11-27 04:43:21.541927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.024 [2024-11-27 04:43:21.542122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.024 [2024-11-27 04:43:21.542627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.024 [2024-11-27 04:43:21.542656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83308 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83308 ']' 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83308 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83308 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.024 killing process with pid 83308 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83308' 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83308 00:21:34.024 [2024-11-27 04:43:21.582382] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:34.024 04:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83308 00:21:34.587 [2024-11-27 04:43:21.950333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:35.520 ************************************ 00:21:35.520 END TEST raid5f_state_function_test 00:21:35.520 ************************************ 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:35.520 00:21:35.520 real 0m12.781s 00:21:35.520 user 0m21.108s 00:21:35.520 sys 0m1.841s 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.520 04:43:23 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:21:35.520 04:43:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:35.520 04:43:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.520 04:43:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.520 ************************************ 00:21:35.520 START TEST 
raid5f_state_function_test_sb 00:21:35.520 ************************************ 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:35.520 
04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:35.520 Process raid pid: 83987 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83987 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83987' 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:35.520 04:43:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83987 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83987 ']' 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.520 04:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.778 [2024-11-27 04:43:23.208374] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:21:35.778 [2024-11-27 04:43:23.208570] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.036 [2024-11-27 04:43:23.410525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.036 [2024-11-27 04:43:23.571878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.295 [2024-11-27 04:43:23.811513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.295 [2024-11-27 04:43:23.811580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.860 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.860 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:36.860 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:36.860 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.860 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.861 [2024-11-27 04:43:24.274107] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:36.861 [2024-11-27 04:43:24.274192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:36.861 [2024-11-27 04:43:24.274210] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:36.861 [2024-11-27 04:43:24.274226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:36.861 [2024-11-27 04:43:24.274236] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:21:36.861 [2024-11-27 04:43:24.274251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:36.861 [2024-11-27 04:43:24.274261] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:36.861 [2024-11-27 04:43:24.274275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.861 "name": "Existed_Raid", 00:21:36.861 "uuid": "3af3f667-167f-4016-ad7d-2ec23d1430e2", 00:21:36.861 "strip_size_kb": 64, 00:21:36.861 "state": "configuring", 00:21:36.861 "raid_level": "raid5f", 00:21:36.861 "superblock": true, 00:21:36.861 "num_base_bdevs": 4, 00:21:36.861 "num_base_bdevs_discovered": 0, 00:21:36.861 "num_base_bdevs_operational": 4, 00:21:36.861 "base_bdevs_list": [ 00:21:36.861 { 00:21:36.861 "name": "BaseBdev1", 00:21:36.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.861 "is_configured": false, 00:21:36.861 "data_offset": 0, 00:21:36.861 "data_size": 0 00:21:36.861 }, 00:21:36.861 { 00:21:36.861 "name": "BaseBdev2", 00:21:36.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.861 "is_configured": false, 00:21:36.861 "data_offset": 0, 00:21:36.861 "data_size": 0 00:21:36.861 }, 00:21:36.861 { 00:21:36.861 "name": "BaseBdev3", 00:21:36.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.861 "is_configured": false, 00:21:36.861 "data_offset": 0, 00:21:36.861 "data_size": 0 00:21:36.861 }, 00:21:36.861 { 00:21:36.861 "name": "BaseBdev4", 00:21:36.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.861 "is_configured": false, 00:21:36.861 "data_offset": 0, 00:21:36.861 "data_size": 0 00:21:36.861 } 00:21:36.861 ] 00:21:36.861 }' 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.861 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
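The `verify_raid_bdev_state` helper invoked above fetches the raid bdev via `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the resulting fields against the expected values (`configuring`, `raid5f`, strip size 64, 4 operational bases). The real helper is shell (`test/bdev/bdev_raid.sh`); the sketch below is an illustrative Python re-implementation of the same comparison, using the JSON captured in the log above with the UUIDs trimmed for brevity:

```python
import json

# raid_bdev_info as captured from `bdev_raid_get_bdevs` in the log above;
# only the fields the state check actually inspects are kept here.
RAID_INFO = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false},
    {"name": "BaseBdev4", "is_configured": false}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    """Mirror the shell helper's checks: state, level, strip size,
    operational count, and that 'discovered' matches the number of
    base bdevs reported as configured."""
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational
            and info["num_base_bdevs_discovered"] == discovered)

print(verify_raid_bdev_state(RAID_INFO, "configuring", "raid5f", 64, 4))
```

With superblock mode enabled (`-s` on `bdev_raid_create`), each base bdev reserves metadata space, which is why configured bases later in the log report `data_offset: 2048` and `data_size: 63488` rather than the full 65536 blocks.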
00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.428 [2024-11-27 04:43:24.802201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.428 [2024-11-27 04:43:24.802384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.428 [2024-11-27 04:43:24.810197] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.428 [2024-11-27 04:43:24.810259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.428 [2024-11-27 04:43:24.810274] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.428 [2024-11-27 04:43:24.810290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.428 [2024-11-27 04:43:24.810300] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:37.428 [2024-11-27 04:43:24.810314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:37.428 [2024-11-27 04:43:24.810324] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:37.428 [2024-11-27 04:43:24.810339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.428 [2024-11-27 04:43:24.855788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.428 BaseBdev1 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.428 [ 00:21:37.428 { 00:21:37.428 "name": "BaseBdev1", 00:21:37.428 "aliases": [ 00:21:37.428 "9b46df2e-d2b4-4670-84a5-c23331084699" 00:21:37.428 ], 00:21:37.428 "product_name": "Malloc disk", 00:21:37.428 "block_size": 512, 00:21:37.428 "num_blocks": 65536, 00:21:37.428 "uuid": "9b46df2e-d2b4-4670-84a5-c23331084699", 00:21:37.428 "assigned_rate_limits": { 00:21:37.428 "rw_ios_per_sec": 0, 00:21:37.428 "rw_mbytes_per_sec": 0, 00:21:37.428 "r_mbytes_per_sec": 0, 00:21:37.428 "w_mbytes_per_sec": 0 00:21:37.428 }, 00:21:37.428 "claimed": true, 00:21:37.428 "claim_type": "exclusive_write", 00:21:37.428 "zoned": false, 00:21:37.428 "supported_io_types": { 00:21:37.428 "read": true, 00:21:37.428 "write": true, 00:21:37.428 "unmap": true, 00:21:37.428 "flush": true, 00:21:37.428 "reset": true, 00:21:37.428 "nvme_admin": false, 00:21:37.428 "nvme_io": false, 00:21:37.428 "nvme_io_md": false, 00:21:37.428 "write_zeroes": true, 00:21:37.428 "zcopy": true, 00:21:37.428 "get_zone_info": false, 00:21:37.428 "zone_management": false, 00:21:37.428 "zone_append": false, 00:21:37.428 "compare": false, 00:21:37.428 "compare_and_write": false, 00:21:37.428 "abort": true, 00:21:37.428 "seek_hole": false, 00:21:37.428 "seek_data": false, 00:21:37.428 "copy": true, 00:21:37.428 "nvme_iov_md": false 00:21:37.428 }, 00:21:37.428 "memory_domains": [ 00:21:37.428 { 00:21:37.428 "dma_device_id": "system", 00:21:37.428 "dma_device_type": 1 00:21:37.428 }, 00:21:37.428 { 00:21:37.428 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:37.428 "dma_device_type": 2 00:21:37.428 } 00:21:37.428 ], 00:21:37.428 "driver_specific": {} 00:21:37.428 } 00:21:37.428 ] 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.428 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.429 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.429 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.429 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.429 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.429 04:43:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.429 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.429 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.429 "name": "Existed_Raid", 00:21:37.429 "uuid": "a4cbf394-226a-4c11-8e5a-c96255c1f9ee", 00:21:37.429 "strip_size_kb": 64, 00:21:37.429 "state": "configuring", 00:21:37.429 "raid_level": "raid5f", 00:21:37.429 "superblock": true, 00:21:37.429 "num_base_bdevs": 4, 00:21:37.429 "num_base_bdevs_discovered": 1, 00:21:37.429 "num_base_bdevs_operational": 4, 00:21:37.429 "base_bdevs_list": [ 00:21:37.429 { 00:21:37.429 "name": "BaseBdev1", 00:21:37.429 "uuid": "9b46df2e-d2b4-4670-84a5-c23331084699", 00:21:37.429 "is_configured": true, 00:21:37.429 "data_offset": 2048, 00:21:37.429 "data_size": 63488 00:21:37.429 }, 00:21:37.429 { 00:21:37.429 "name": "BaseBdev2", 00:21:37.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.429 "is_configured": false, 00:21:37.429 "data_offset": 0, 00:21:37.429 "data_size": 0 00:21:37.429 }, 00:21:37.429 { 00:21:37.429 "name": "BaseBdev3", 00:21:37.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.429 "is_configured": false, 00:21:37.429 "data_offset": 0, 00:21:37.429 "data_size": 0 00:21:37.429 }, 00:21:37.429 { 00:21:37.429 "name": "BaseBdev4", 00:21:37.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.429 "is_configured": false, 00:21:37.429 "data_offset": 0, 00:21:37.429 "data_size": 0 00:21:37.429 } 00:21:37.429 ] 00:21:37.429 }' 00:21:37.429 04:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.429 04:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:37.996 04:43:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.996 [2024-11-27 04:43:25.400001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.996 [2024-11-27 04:43:25.400200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.996 [2024-11-27 04:43:25.408174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.996 [2024-11-27 04:43:25.410735] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.996 [2024-11-27 04:43:25.410925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.996 [2024-11-27 04:43:25.411046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:37.996 [2024-11-27 04:43:25.411172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:37.996 [2024-11-27 04:43:25.411286] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:37.996 [2024-11-27 04:43:25.411346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.996 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.997 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.997 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.997 04:43:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.997 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.997 "name": "Existed_Raid", 00:21:37.997 "uuid": "e5c1b7f6-a84d-4778-8acb-dffb1a31c6eb", 00:21:37.997 "strip_size_kb": 64, 00:21:37.997 "state": "configuring", 00:21:37.997 "raid_level": "raid5f", 00:21:37.997 "superblock": true, 00:21:37.997 "num_base_bdevs": 4, 00:21:37.997 "num_base_bdevs_discovered": 1, 00:21:37.997 "num_base_bdevs_operational": 4, 00:21:37.997 "base_bdevs_list": [ 00:21:37.997 { 00:21:37.997 "name": "BaseBdev1", 00:21:37.997 "uuid": "9b46df2e-d2b4-4670-84a5-c23331084699", 00:21:37.997 "is_configured": true, 00:21:37.997 "data_offset": 2048, 00:21:37.997 "data_size": 63488 00:21:37.997 }, 00:21:37.997 { 00:21:37.997 "name": "BaseBdev2", 00:21:37.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.997 "is_configured": false, 00:21:37.997 "data_offset": 0, 00:21:37.997 "data_size": 0 00:21:37.997 }, 00:21:37.997 { 00:21:37.997 "name": "BaseBdev3", 00:21:37.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.997 "is_configured": false, 00:21:37.997 "data_offset": 0, 00:21:37.997 "data_size": 0 00:21:37.997 }, 00:21:37.997 { 00:21:37.997 "name": "BaseBdev4", 00:21:37.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.997 "is_configured": false, 00:21:37.997 "data_offset": 0, 00:21:37.997 "data_size": 0 00:21:37.997 } 00:21:37.997 ] 00:21:37.997 }' 00:21:37.997 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.997 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:38.565 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:38.565 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 [2024-11-27 04:43:25.965599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.565 BaseBdev2 00:21:38.565 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.565 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:38.565 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:38.565 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:38.565 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 [ 00:21:38.566 { 00:21:38.566 "name": "BaseBdev2", 00:21:38.566 "aliases": [ 00:21:38.566 
"3a38f864-398d-4284-999b-9cee81e69768" 00:21:38.566 ], 00:21:38.566 "product_name": "Malloc disk", 00:21:38.566 "block_size": 512, 00:21:38.566 "num_blocks": 65536, 00:21:38.566 "uuid": "3a38f864-398d-4284-999b-9cee81e69768", 00:21:38.566 "assigned_rate_limits": { 00:21:38.566 "rw_ios_per_sec": 0, 00:21:38.566 "rw_mbytes_per_sec": 0, 00:21:38.566 "r_mbytes_per_sec": 0, 00:21:38.566 "w_mbytes_per_sec": 0 00:21:38.566 }, 00:21:38.566 "claimed": true, 00:21:38.566 "claim_type": "exclusive_write", 00:21:38.566 "zoned": false, 00:21:38.566 "supported_io_types": { 00:21:38.566 "read": true, 00:21:38.566 "write": true, 00:21:38.566 "unmap": true, 00:21:38.566 "flush": true, 00:21:38.566 "reset": true, 00:21:38.566 "nvme_admin": false, 00:21:38.566 "nvme_io": false, 00:21:38.566 "nvme_io_md": false, 00:21:38.566 "write_zeroes": true, 00:21:38.566 "zcopy": true, 00:21:38.566 "get_zone_info": false, 00:21:38.566 "zone_management": false, 00:21:38.566 "zone_append": false, 00:21:38.566 "compare": false, 00:21:38.566 "compare_and_write": false, 00:21:38.566 "abort": true, 00:21:38.566 "seek_hole": false, 00:21:38.566 "seek_data": false, 00:21:38.566 "copy": true, 00:21:38.566 "nvme_iov_md": false 00:21:38.566 }, 00:21:38.566 "memory_domains": [ 00:21:38.566 { 00:21:38.566 "dma_device_id": "system", 00:21:38.566 "dma_device_type": 1 00:21:38.566 }, 00:21:38.566 { 00:21:38.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.566 "dma_device_type": 2 00:21:38.566 } 00:21:38.566 ], 00:21:38.566 "driver_specific": {} 00:21:38.566 } 00:21:38.566 ] 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.566 04:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.566 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.566 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.566 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.566 "name": "Existed_Raid", 00:21:38.566 "uuid": 
"e5c1b7f6-a84d-4778-8acb-dffb1a31c6eb", 00:21:38.566 "strip_size_kb": 64, 00:21:38.566 "state": "configuring", 00:21:38.566 "raid_level": "raid5f", 00:21:38.566 "superblock": true, 00:21:38.566 "num_base_bdevs": 4, 00:21:38.566 "num_base_bdevs_discovered": 2, 00:21:38.566 "num_base_bdevs_operational": 4, 00:21:38.566 "base_bdevs_list": [ 00:21:38.566 { 00:21:38.566 "name": "BaseBdev1", 00:21:38.566 "uuid": "9b46df2e-d2b4-4670-84a5-c23331084699", 00:21:38.566 "is_configured": true, 00:21:38.566 "data_offset": 2048, 00:21:38.566 "data_size": 63488 00:21:38.566 }, 00:21:38.566 { 00:21:38.566 "name": "BaseBdev2", 00:21:38.566 "uuid": "3a38f864-398d-4284-999b-9cee81e69768", 00:21:38.566 "is_configured": true, 00:21:38.566 "data_offset": 2048, 00:21:38.566 "data_size": 63488 00:21:38.566 }, 00:21:38.566 { 00:21:38.566 "name": "BaseBdev3", 00:21:38.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.566 "is_configured": false, 00:21:38.566 "data_offset": 0, 00:21:38.566 "data_size": 0 00:21:38.566 }, 00:21:38.566 { 00:21:38.566 "name": "BaseBdev4", 00:21:38.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.566 "is_configured": false, 00:21:38.566 "data_offset": 0, 00:21:38.566 "data_size": 0 00:21:38.566 } 00:21:38.566 ] 00:21:38.566 }' 00:21:38.566 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.566 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.133 [2024-11-27 04:43:26.557471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:39.133 BaseBdev3 
00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.133 [ 00:21:39.133 { 00:21:39.133 "name": "BaseBdev3", 00:21:39.133 "aliases": [ 00:21:39.133 "caa21ccc-8d0c-46c1-8517-bde948f1ed7a" 00:21:39.133 ], 00:21:39.133 "product_name": "Malloc disk", 00:21:39.133 "block_size": 512, 00:21:39.133 "num_blocks": 65536, 00:21:39.133 "uuid": "caa21ccc-8d0c-46c1-8517-bde948f1ed7a", 00:21:39.133 
"assigned_rate_limits": { 00:21:39.133 "rw_ios_per_sec": 0, 00:21:39.133 "rw_mbytes_per_sec": 0, 00:21:39.133 "r_mbytes_per_sec": 0, 00:21:39.133 "w_mbytes_per_sec": 0 00:21:39.133 }, 00:21:39.133 "claimed": true, 00:21:39.133 "claim_type": "exclusive_write", 00:21:39.133 "zoned": false, 00:21:39.133 "supported_io_types": { 00:21:39.133 "read": true, 00:21:39.133 "write": true, 00:21:39.133 "unmap": true, 00:21:39.133 "flush": true, 00:21:39.133 "reset": true, 00:21:39.133 "nvme_admin": false, 00:21:39.133 "nvme_io": false, 00:21:39.133 "nvme_io_md": false, 00:21:39.133 "write_zeroes": true, 00:21:39.133 "zcopy": true, 00:21:39.133 "get_zone_info": false, 00:21:39.133 "zone_management": false, 00:21:39.133 "zone_append": false, 00:21:39.133 "compare": false, 00:21:39.133 "compare_and_write": false, 00:21:39.133 "abort": true, 00:21:39.133 "seek_hole": false, 00:21:39.133 "seek_data": false, 00:21:39.133 "copy": true, 00:21:39.133 "nvme_iov_md": false 00:21:39.133 }, 00:21:39.133 "memory_domains": [ 00:21:39.133 { 00:21:39.133 "dma_device_id": "system", 00:21:39.133 "dma_device_type": 1 00:21:39.133 }, 00:21:39.133 { 00:21:39.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.133 "dma_device_type": 2 00:21:39.133 } 00:21:39.133 ], 00:21:39.133 "driver_specific": {} 00:21:39.133 } 00:21:39.133 ] 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.133 "name": "Existed_Raid", 00:21:39.133 "uuid": "e5c1b7f6-a84d-4778-8acb-dffb1a31c6eb", 00:21:39.133 "strip_size_kb": 64, 00:21:39.133 "state": "configuring", 00:21:39.133 "raid_level": "raid5f", 00:21:39.133 "superblock": true, 00:21:39.133 "num_base_bdevs": 4, 00:21:39.133 "num_base_bdevs_discovered": 3, 
00:21:39.133 "num_base_bdevs_operational": 4, 00:21:39.133 "base_bdevs_list": [ 00:21:39.133 { 00:21:39.133 "name": "BaseBdev1", 00:21:39.133 "uuid": "9b46df2e-d2b4-4670-84a5-c23331084699", 00:21:39.133 "is_configured": true, 00:21:39.133 "data_offset": 2048, 00:21:39.133 "data_size": 63488 00:21:39.133 }, 00:21:39.133 { 00:21:39.133 "name": "BaseBdev2", 00:21:39.133 "uuid": "3a38f864-398d-4284-999b-9cee81e69768", 00:21:39.133 "is_configured": true, 00:21:39.133 "data_offset": 2048, 00:21:39.133 "data_size": 63488 00:21:39.133 }, 00:21:39.133 { 00:21:39.133 "name": "BaseBdev3", 00:21:39.133 "uuid": "caa21ccc-8d0c-46c1-8517-bde948f1ed7a", 00:21:39.133 "is_configured": true, 00:21:39.133 "data_offset": 2048, 00:21:39.133 "data_size": 63488 00:21:39.133 }, 00:21:39.133 { 00:21:39.133 "name": "BaseBdev4", 00:21:39.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.133 "is_configured": false, 00:21:39.133 "data_offset": 0, 00:21:39.133 "data_size": 0 00:21:39.133 } 00:21:39.133 ] 00:21:39.133 }' 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.133 04:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.739 [2024-11-27 04:43:27.136154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:39.739 BaseBdev4 00:21:39.739 [2024-11-27 04:43:27.136731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:39.739 [2024-11-27 04:43:27.136757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
00:21:39.739 [2024-11-27 04:43:27.137116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.739 [2024-11-27 04:43:27.144029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:39.739 [2024-11-27 04:43:27.144062] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:39.739 [2024-11-27 04:43:27.144376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:39.739 04:43:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.739 [ 00:21:39.739 { 00:21:39.739 "name": "BaseBdev4", 00:21:39.739 "aliases": [ 00:21:39.739 "10e7faaa-5dd8-44df-afc2-5840aba54d68" 00:21:39.739 ], 00:21:39.739 "product_name": "Malloc disk", 00:21:39.739 "block_size": 512, 00:21:39.739 "num_blocks": 65536, 00:21:39.739 "uuid": "10e7faaa-5dd8-44df-afc2-5840aba54d68", 00:21:39.739 "assigned_rate_limits": { 00:21:39.739 "rw_ios_per_sec": 0, 00:21:39.739 "rw_mbytes_per_sec": 0, 00:21:39.739 "r_mbytes_per_sec": 0, 00:21:39.739 "w_mbytes_per_sec": 0 00:21:39.739 }, 00:21:39.739 "claimed": true, 00:21:39.739 "claim_type": "exclusive_write", 00:21:39.739 "zoned": false, 00:21:39.739 "supported_io_types": { 00:21:39.739 "read": true, 00:21:39.739 "write": true, 00:21:39.739 "unmap": true, 00:21:39.739 "flush": true, 00:21:39.739 "reset": true, 00:21:39.739 "nvme_admin": false, 00:21:39.739 "nvme_io": false, 00:21:39.739 "nvme_io_md": false, 00:21:39.739 "write_zeroes": true, 00:21:39.739 "zcopy": true, 00:21:39.739 "get_zone_info": false, 00:21:39.739 "zone_management": false, 00:21:39.739 "zone_append": false, 00:21:39.739 "compare": false, 00:21:39.739 "compare_and_write": false, 00:21:39.739 "abort": true, 00:21:39.739 "seek_hole": false, 00:21:39.739 "seek_data": false, 00:21:39.739 "copy": true, 00:21:39.739 "nvme_iov_md": false 00:21:39.739 }, 00:21:39.739 "memory_domains": [ 00:21:39.739 { 00:21:39.739 "dma_device_id": "system", 00:21:39.739 "dma_device_type": 1 00:21:39.739 }, 00:21:39.739 { 00:21:39.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.739 "dma_device_type": 2 00:21:39.739 } 00:21:39.739 ], 00:21:39.739 "driver_specific": {} 00:21:39.739 } 00:21:39.739 ] 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.739 04:43:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:39.739 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.740 "name": "Existed_Raid", 00:21:39.740 "uuid": "e5c1b7f6-a84d-4778-8acb-dffb1a31c6eb", 00:21:39.740 "strip_size_kb": 64, 00:21:39.740 "state": "online", 00:21:39.740 "raid_level": "raid5f", 00:21:39.740 "superblock": true, 00:21:39.740 "num_base_bdevs": 4, 00:21:39.740 "num_base_bdevs_discovered": 4, 00:21:39.740 "num_base_bdevs_operational": 4, 00:21:39.740 "base_bdevs_list": [ 00:21:39.740 { 00:21:39.740 "name": "BaseBdev1", 00:21:39.740 "uuid": "9b46df2e-d2b4-4670-84a5-c23331084699", 00:21:39.740 "is_configured": true, 00:21:39.740 "data_offset": 2048, 00:21:39.740 "data_size": 63488 00:21:39.740 }, 00:21:39.740 { 00:21:39.740 "name": "BaseBdev2", 00:21:39.740 "uuid": "3a38f864-398d-4284-999b-9cee81e69768", 00:21:39.740 "is_configured": true, 00:21:39.740 "data_offset": 2048, 00:21:39.740 "data_size": 63488 00:21:39.740 }, 00:21:39.740 { 00:21:39.740 "name": "BaseBdev3", 00:21:39.740 "uuid": "caa21ccc-8d0c-46c1-8517-bde948f1ed7a", 00:21:39.740 "is_configured": true, 00:21:39.740 "data_offset": 2048, 00:21:39.740 "data_size": 63488 00:21:39.740 }, 00:21:39.740 { 00:21:39.740 "name": "BaseBdev4", 00:21:39.740 "uuid": "10e7faaa-5dd8-44df-afc2-5840aba54d68", 00:21:39.740 "is_configured": true, 00:21:39.740 "data_offset": 2048, 00:21:39.740 "data_size": 63488 00:21:39.740 } 00:21:39.740 ] 00:21:39.740 }' 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.740 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:40.307 [2024-11-27 04:43:27.724130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:40.307 "name": "Existed_Raid", 00:21:40.307 "aliases": [ 00:21:40.307 "e5c1b7f6-a84d-4778-8acb-dffb1a31c6eb" 00:21:40.307 ], 00:21:40.307 "product_name": "Raid Volume", 00:21:40.307 "block_size": 512, 00:21:40.307 "num_blocks": 190464, 00:21:40.307 "uuid": "e5c1b7f6-a84d-4778-8acb-dffb1a31c6eb", 00:21:40.307 "assigned_rate_limits": { 00:21:40.307 "rw_ios_per_sec": 0, 00:21:40.307 "rw_mbytes_per_sec": 0, 00:21:40.307 "r_mbytes_per_sec": 0, 00:21:40.307 "w_mbytes_per_sec": 0 00:21:40.307 }, 00:21:40.307 "claimed": false, 00:21:40.307 "zoned": false, 00:21:40.307 "supported_io_types": { 00:21:40.307 "read": true, 00:21:40.307 "write": true, 00:21:40.307 "unmap": false, 00:21:40.307 "flush": false, 
00:21:40.307 "reset": true, 00:21:40.307 "nvme_admin": false, 00:21:40.307 "nvme_io": false, 00:21:40.307 "nvme_io_md": false, 00:21:40.307 "write_zeroes": true, 00:21:40.307 "zcopy": false, 00:21:40.307 "get_zone_info": false, 00:21:40.307 "zone_management": false, 00:21:40.307 "zone_append": false, 00:21:40.307 "compare": false, 00:21:40.307 "compare_and_write": false, 00:21:40.307 "abort": false, 00:21:40.307 "seek_hole": false, 00:21:40.307 "seek_data": false, 00:21:40.307 "copy": false, 00:21:40.307 "nvme_iov_md": false 00:21:40.307 }, 00:21:40.307 "driver_specific": { 00:21:40.307 "raid": { 00:21:40.307 "uuid": "e5c1b7f6-a84d-4778-8acb-dffb1a31c6eb", 00:21:40.307 "strip_size_kb": 64, 00:21:40.307 "state": "online", 00:21:40.307 "raid_level": "raid5f", 00:21:40.307 "superblock": true, 00:21:40.307 "num_base_bdevs": 4, 00:21:40.307 "num_base_bdevs_discovered": 4, 00:21:40.307 "num_base_bdevs_operational": 4, 00:21:40.307 "base_bdevs_list": [ 00:21:40.307 { 00:21:40.307 "name": "BaseBdev1", 00:21:40.307 "uuid": "9b46df2e-d2b4-4670-84a5-c23331084699", 00:21:40.307 "is_configured": true, 00:21:40.307 "data_offset": 2048, 00:21:40.307 "data_size": 63488 00:21:40.307 }, 00:21:40.307 { 00:21:40.307 "name": "BaseBdev2", 00:21:40.307 "uuid": "3a38f864-398d-4284-999b-9cee81e69768", 00:21:40.307 "is_configured": true, 00:21:40.307 "data_offset": 2048, 00:21:40.307 "data_size": 63488 00:21:40.307 }, 00:21:40.307 { 00:21:40.307 "name": "BaseBdev3", 00:21:40.307 "uuid": "caa21ccc-8d0c-46c1-8517-bde948f1ed7a", 00:21:40.307 "is_configured": true, 00:21:40.307 "data_offset": 2048, 00:21:40.307 "data_size": 63488 00:21:40.307 }, 00:21:40.307 { 00:21:40.307 "name": "BaseBdev4", 00:21:40.307 "uuid": "10e7faaa-5dd8-44df-afc2-5840aba54d68", 00:21:40.307 "is_configured": true, 00:21:40.307 "data_offset": 2048, 00:21:40.307 "data_size": 63488 00:21:40.307 } 00:21:40.307 ] 00:21:40.307 } 00:21:40.307 } 00:21:40.307 }' 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:40.307 BaseBdev2 00:21:40.307 BaseBdev3 00:21:40.307 BaseBdev4' 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.307 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:40.567 04:43:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.567 04:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.567 [2024-11-27 04:43:28.080051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.567 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.826 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.826 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.826 "name": "Existed_Raid", 00:21:40.826 "uuid": "e5c1b7f6-a84d-4778-8acb-dffb1a31c6eb", 00:21:40.826 "strip_size_kb": 64, 00:21:40.826 "state": "online", 00:21:40.826 "raid_level": "raid5f", 00:21:40.826 "superblock": true, 00:21:40.826 "num_base_bdevs": 4, 00:21:40.826 "num_base_bdevs_discovered": 3, 00:21:40.826 "num_base_bdevs_operational": 3, 00:21:40.826 "base_bdevs_list": [ 00:21:40.826 { 00:21:40.826 "name": null, 00:21:40.826 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:40.826 "is_configured": false, 00:21:40.826 "data_offset": 0, 00:21:40.826 "data_size": 63488 00:21:40.826 }, 00:21:40.826 { 00:21:40.826 "name": "BaseBdev2", 00:21:40.826 "uuid": "3a38f864-398d-4284-999b-9cee81e69768", 00:21:40.826 "is_configured": true, 00:21:40.826 "data_offset": 2048, 00:21:40.826 "data_size": 63488 00:21:40.826 }, 00:21:40.826 { 00:21:40.826 "name": "BaseBdev3", 00:21:40.826 "uuid": "caa21ccc-8d0c-46c1-8517-bde948f1ed7a", 00:21:40.826 "is_configured": true, 00:21:40.826 "data_offset": 2048, 00:21:40.826 "data_size": 63488 00:21:40.826 }, 00:21:40.826 { 00:21:40.826 "name": "BaseBdev4", 00:21:40.826 "uuid": "10e7faaa-5dd8-44df-afc2-5840aba54d68", 00:21:40.826 "is_configured": true, 00:21:40.826 "data_offset": 2048, 00:21:40.827 "data_size": 63488 00:21:40.827 } 00:21:40.827 ] 00:21:40.827 }' 00:21:40.827 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.827 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.085 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:41.085 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:41.086 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:41.086 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.086 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.086 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.344 [2024-11-27 04:43:28.744201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:41.344 [2024-11-27 04:43:28.744569] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:41.344 [2024-11-27 04:43:28.831937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:41.344 
04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.344 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.344 [2024-11-27 04:43:28.891962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:41.603 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.603 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:41.603 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:41.603 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.603 04:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:41.603 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.603 04:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.603 [2024-11-27 04:43:29.035639] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:41.603 [2024-11-27 04:43:29.035838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.603 04:43:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:41.861 BaseBdev2 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.861 [ 00:21:41.861 { 00:21:41.861 "name": "BaseBdev2", 00:21:41.861 "aliases": [ 00:21:41.861 "cb9c4488-f22d-48d5-bf0a-d2954741f3ce" 00:21:41.861 ], 00:21:41.861 "product_name": "Malloc disk", 00:21:41.861 "block_size": 512, 00:21:41.861 "num_blocks": 65536, 00:21:41.861 "uuid": 
"cb9c4488-f22d-48d5-bf0a-d2954741f3ce", 00:21:41.861 "assigned_rate_limits": { 00:21:41.861 "rw_ios_per_sec": 0, 00:21:41.861 "rw_mbytes_per_sec": 0, 00:21:41.861 "r_mbytes_per_sec": 0, 00:21:41.861 "w_mbytes_per_sec": 0 00:21:41.861 }, 00:21:41.861 "claimed": false, 00:21:41.861 "zoned": false, 00:21:41.861 "supported_io_types": { 00:21:41.861 "read": true, 00:21:41.861 "write": true, 00:21:41.861 "unmap": true, 00:21:41.861 "flush": true, 00:21:41.861 "reset": true, 00:21:41.861 "nvme_admin": false, 00:21:41.861 "nvme_io": false, 00:21:41.861 "nvme_io_md": false, 00:21:41.861 "write_zeroes": true, 00:21:41.861 "zcopy": true, 00:21:41.861 "get_zone_info": false, 00:21:41.861 "zone_management": false, 00:21:41.861 "zone_append": false, 00:21:41.861 "compare": false, 00:21:41.861 "compare_and_write": false, 00:21:41.861 "abort": true, 00:21:41.861 "seek_hole": false, 00:21:41.861 "seek_data": false, 00:21:41.861 "copy": true, 00:21:41.861 "nvme_iov_md": false 00:21:41.861 }, 00:21:41.861 "memory_domains": [ 00:21:41.861 { 00:21:41.861 "dma_device_id": "system", 00:21:41.861 "dma_device_type": 1 00:21:41.861 }, 00:21:41.861 { 00:21:41.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.861 "dma_device_type": 2 00:21:41.861 } 00:21:41.861 ], 00:21:41.861 "driver_specific": {} 00:21:41.861 } 00:21:41.861 ] 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.861 BaseBdev3 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.861 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.862 [ 00:21:41.862 { 00:21:41.862 "name": "BaseBdev3", 00:21:41.862 "aliases": [ 00:21:41.862 "08c7e11d-c619-45b3-a405-293897059e1b" 00:21:41.862 ], 00:21:41.862 
"product_name": "Malloc disk", 00:21:41.862 "block_size": 512, 00:21:41.862 "num_blocks": 65536, 00:21:41.862 "uuid": "08c7e11d-c619-45b3-a405-293897059e1b", 00:21:41.862 "assigned_rate_limits": { 00:21:41.862 "rw_ios_per_sec": 0, 00:21:41.862 "rw_mbytes_per_sec": 0, 00:21:41.862 "r_mbytes_per_sec": 0, 00:21:41.862 "w_mbytes_per_sec": 0 00:21:41.862 }, 00:21:41.862 "claimed": false, 00:21:41.862 "zoned": false, 00:21:41.862 "supported_io_types": { 00:21:41.862 "read": true, 00:21:41.862 "write": true, 00:21:41.862 "unmap": true, 00:21:41.862 "flush": true, 00:21:41.862 "reset": true, 00:21:41.862 "nvme_admin": false, 00:21:41.862 "nvme_io": false, 00:21:41.862 "nvme_io_md": false, 00:21:41.862 "write_zeroes": true, 00:21:41.862 "zcopy": true, 00:21:41.862 "get_zone_info": false, 00:21:41.862 "zone_management": false, 00:21:41.862 "zone_append": false, 00:21:41.862 "compare": false, 00:21:41.862 "compare_and_write": false, 00:21:41.862 "abort": true, 00:21:41.862 "seek_hole": false, 00:21:41.862 "seek_data": false, 00:21:41.862 "copy": true, 00:21:41.862 "nvme_iov_md": false 00:21:41.862 }, 00:21:41.862 "memory_domains": [ 00:21:41.862 { 00:21:41.862 "dma_device_id": "system", 00:21:41.862 "dma_device_type": 1 00:21:41.862 }, 00:21:41.862 { 00:21:41.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.862 "dma_device_type": 2 00:21:41.862 } 00:21:41.862 ], 00:21:41.862 "driver_specific": {} 00:21:41.862 } 00:21:41.862 ] 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.862 BaseBdev4 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.862 [ 00:21:41.862 { 00:21:41.862 "name": "BaseBdev4", 00:21:41.862 
"aliases": [ 00:21:41.862 "75beffb7-92ad-47d4-9720-788c4e376238" 00:21:41.862 ], 00:21:41.862 "product_name": "Malloc disk", 00:21:41.862 "block_size": 512, 00:21:41.862 "num_blocks": 65536, 00:21:41.862 "uuid": "75beffb7-92ad-47d4-9720-788c4e376238", 00:21:41.862 "assigned_rate_limits": { 00:21:41.862 "rw_ios_per_sec": 0, 00:21:41.862 "rw_mbytes_per_sec": 0, 00:21:41.862 "r_mbytes_per_sec": 0, 00:21:41.862 "w_mbytes_per_sec": 0 00:21:41.862 }, 00:21:41.862 "claimed": false, 00:21:41.862 "zoned": false, 00:21:41.862 "supported_io_types": { 00:21:41.862 "read": true, 00:21:41.862 "write": true, 00:21:41.862 "unmap": true, 00:21:41.862 "flush": true, 00:21:41.862 "reset": true, 00:21:41.862 "nvme_admin": false, 00:21:41.862 "nvme_io": false, 00:21:41.862 "nvme_io_md": false, 00:21:41.862 "write_zeroes": true, 00:21:41.862 "zcopy": true, 00:21:41.862 "get_zone_info": false, 00:21:41.862 "zone_management": false, 00:21:41.862 "zone_append": false, 00:21:41.862 "compare": false, 00:21:41.862 "compare_and_write": false, 00:21:41.862 "abort": true, 00:21:41.862 "seek_hole": false, 00:21:41.862 "seek_data": false, 00:21:41.862 "copy": true, 00:21:41.862 "nvme_iov_md": false 00:21:41.862 }, 00:21:41.862 "memory_domains": [ 00:21:41.862 { 00:21:41.862 "dma_device_id": "system", 00:21:41.862 "dma_device_type": 1 00:21:41.862 }, 00:21:41.862 { 00:21:41.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.862 "dma_device_type": 2 00:21:41.862 } 00:21:41.862 ], 00:21:41.862 "driver_specific": {} 00:21:41.862 } 00:21:41.862 ] 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:41.862 
04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.862 [2024-11-27 04:43:29.411549] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:41.862 [2024-11-27 04:43:29.411605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:41.862 [2024-11-27 04:43:29.411636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:41.862 [2024-11-27 04:43:29.414018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:41.862 [2024-11-27 04:43:29.414095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.862 "name": "Existed_Raid", 00:21:41.862 "uuid": "526bc069-1157-4d8e-83cf-8449a87109ed", 00:21:41.862 "strip_size_kb": 64, 00:21:41.862 "state": "configuring", 00:21:41.862 "raid_level": "raid5f", 00:21:41.862 "superblock": true, 00:21:41.862 "num_base_bdevs": 4, 00:21:41.862 "num_base_bdevs_discovered": 3, 00:21:41.862 "num_base_bdevs_operational": 4, 00:21:41.862 "base_bdevs_list": [ 00:21:41.862 { 00:21:41.862 "name": "BaseBdev1", 00:21:41.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.862 "is_configured": false, 00:21:41.862 "data_offset": 0, 00:21:41.862 "data_size": 0 00:21:41.862 }, 00:21:41.862 { 00:21:41.862 "name": "BaseBdev2", 00:21:41.862 "uuid": "cb9c4488-f22d-48d5-bf0a-d2954741f3ce", 00:21:41.862 "is_configured": true, 00:21:41.862 "data_offset": 2048, 00:21:41.862 "data_size": 63488 00:21:41.862 }, 00:21:41.862 { 00:21:41.862 "name": "BaseBdev3", 
00:21:41.862 "uuid": "08c7e11d-c619-45b3-a405-293897059e1b", 00:21:41.862 "is_configured": true, 00:21:41.862 "data_offset": 2048, 00:21:41.862 "data_size": 63488 00:21:41.862 }, 00:21:41.862 { 00:21:41.862 "name": "BaseBdev4", 00:21:41.862 "uuid": "75beffb7-92ad-47d4-9720-788c4e376238", 00:21:41.862 "is_configured": true, 00:21:41.862 "data_offset": 2048, 00:21:41.862 "data_size": 63488 00:21:41.862 } 00:21:41.862 ] 00:21:41.862 }' 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.862 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.429 [2024-11-27 04:43:29.903686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:42.429 
04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.429 "name": "Existed_Raid", 00:21:42.429 "uuid": "526bc069-1157-4d8e-83cf-8449a87109ed", 00:21:42.429 "strip_size_kb": 64, 00:21:42.429 "state": "configuring", 00:21:42.429 "raid_level": "raid5f", 00:21:42.429 "superblock": true, 00:21:42.429 "num_base_bdevs": 4, 00:21:42.429 "num_base_bdevs_discovered": 2, 00:21:42.429 "num_base_bdevs_operational": 4, 00:21:42.429 "base_bdevs_list": [ 00:21:42.429 { 00:21:42.429 "name": "BaseBdev1", 00:21:42.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.429 "is_configured": false, 00:21:42.429 "data_offset": 0, 00:21:42.429 "data_size": 0 00:21:42.429 }, 00:21:42.429 { 00:21:42.429 "name": null, 00:21:42.429 "uuid": "cb9c4488-f22d-48d5-bf0a-d2954741f3ce", 00:21:42.429 "is_configured": false, 00:21:42.429 "data_offset": 0, 00:21:42.429 "data_size": 63488 00:21:42.429 }, 00:21:42.429 { 
00:21:42.429 "name": "BaseBdev3", 00:21:42.429 "uuid": "08c7e11d-c619-45b3-a405-293897059e1b", 00:21:42.429 "is_configured": true, 00:21:42.429 "data_offset": 2048, 00:21:42.429 "data_size": 63488 00:21:42.429 }, 00:21:42.429 { 00:21:42.429 "name": "BaseBdev4", 00:21:42.429 "uuid": "75beffb7-92ad-47d4-9720-788c4e376238", 00:21:42.429 "is_configured": true, 00:21:42.429 "data_offset": 2048, 00:21:42.429 "data_size": 63488 00:21:42.429 } 00:21:42.429 ] 00:21:42.429 }' 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.429 04:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.996 [2024-11-27 04:43:30.518875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:42.996 BaseBdev1 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.996 [ 00:21:42.996 { 00:21:42.996 "name": "BaseBdev1", 00:21:42.996 "aliases": [ 00:21:42.996 "22446a14-dcdc-4da3-9fb7-c0c9bf854419" 00:21:42.996 ], 00:21:42.996 "product_name": "Malloc disk", 00:21:42.996 "block_size": 512, 00:21:42.996 "num_blocks": 65536, 00:21:42.996 "uuid": "22446a14-dcdc-4da3-9fb7-c0c9bf854419", 00:21:42.996 "assigned_rate_limits": { 00:21:42.996 "rw_ios_per_sec": 0, 00:21:42.996 "rw_mbytes_per_sec": 0, 00:21:42.996 
"r_mbytes_per_sec": 0, 00:21:42.996 "w_mbytes_per_sec": 0 00:21:42.996 }, 00:21:42.996 "claimed": true, 00:21:42.996 "claim_type": "exclusive_write", 00:21:42.996 "zoned": false, 00:21:42.996 "supported_io_types": { 00:21:42.996 "read": true, 00:21:42.996 "write": true, 00:21:42.996 "unmap": true, 00:21:42.996 "flush": true, 00:21:42.996 "reset": true, 00:21:42.996 "nvme_admin": false, 00:21:42.996 "nvme_io": false, 00:21:42.996 "nvme_io_md": false, 00:21:42.996 "write_zeroes": true, 00:21:42.996 "zcopy": true, 00:21:42.996 "get_zone_info": false, 00:21:42.996 "zone_management": false, 00:21:42.996 "zone_append": false, 00:21:42.996 "compare": false, 00:21:42.996 "compare_and_write": false, 00:21:42.996 "abort": true, 00:21:42.996 "seek_hole": false, 00:21:42.996 "seek_data": false, 00:21:42.996 "copy": true, 00:21:42.996 "nvme_iov_md": false 00:21:42.996 }, 00:21:42.996 "memory_domains": [ 00:21:42.996 { 00:21:42.996 "dma_device_id": "system", 00:21:42.996 "dma_device_type": 1 00:21:42.996 }, 00:21:42.996 { 00:21:42.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.996 "dma_device_type": 2 00:21:42.996 } 00:21:42.996 ], 00:21:42.996 "driver_specific": {} 00:21:42.996 } 00:21:42.996 ] 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:42.996 04:43:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.996 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.996 "name": "Existed_Raid", 00:21:42.996 "uuid": "526bc069-1157-4d8e-83cf-8449a87109ed", 00:21:42.996 "strip_size_kb": 64, 00:21:42.996 "state": "configuring", 00:21:42.996 "raid_level": "raid5f", 00:21:42.996 "superblock": true, 00:21:42.996 "num_base_bdevs": 4, 00:21:42.996 "num_base_bdevs_discovered": 3, 00:21:42.996 "num_base_bdevs_operational": 4, 00:21:42.996 "base_bdevs_list": [ 00:21:42.996 { 00:21:42.996 "name": "BaseBdev1", 00:21:42.996 "uuid": "22446a14-dcdc-4da3-9fb7-c0c9bf854419", 00:21:42.996 "is_configured": true, 00:21:42.996 "data_offset": 2048, 00:21:42.996 "data_size": 63488 00:21:42.996 
}, 00:21:42.996 { 00:21:42.996 "name": null, 00:21:42.996 "uuid": "cb9c4488-f22d-48d5-bf0a-d2954741f3ce", 00:21:42.996 "is_configured": false, 00:21:42.996 "data_offset": 0, 00:21:42.996 "data_size": 63488 00:21:42.997 }, 00:21:42.997 { 00:21:42.997 "name": "BaseBdev3", 00:21:42.997 "uuid": "08c7e11d-c619-45b3-a405-293897059e1b", 00:21:42.997 "is_configured": true, 00:21:42.997 "data_offset": 2048, 00:21:42.997 "data_size": 63488 00:21:42.997 }, 00:21:42.997 { 00:21:42.997 "name": "BaseBdev4", 00:21:42.997 "uuid": "75beffb7-92ad-47d4-9720-788c4e376238", 00:21:42.997 "is_configured": true, 00:21:42.997 "data_offset": 2048, 00:21:42.997 "data_size": 63488 00:21:42.997 } 00:21:42.997 ] 00:21:42.997 }' 00:21:42.997 04:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.997 04:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.564 
[2024-11-27 04:43:31.111156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:43.564 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.564 "name": "Existed_Raid", 00:21:43.564 "uuid": "526bc069-1157-4d8e-83cf-8449a87109ed", 00:21:43.564 "strip_size_kb": 64, 00:21:43.564 "state": "configuring", 00:21:43.564 "raid_level": "raid5f", 00:21:43.564 "superblock": true, 00:21:43.564 "num_base_bdevs": 4, 00:21:43.564 "num_base_bdevs_discovered": 2, 00:21:43.564 "num_base_bdevs_operational": 4, 00:21:43.564 "base_bdevs_list": [ 00:21:43.564 { 00:21:43.564 "name": "BaseBdev1", 00:21:43.564 "uuid": "22446a14-dcdc-4da3-9fb7-c0c9bf854419", 00:21:43.564 "is_configured": true, 00:21:43.564 "data_offset": 2048, 00:21:43.564 "data_size": 63488 00:21:43.564 }, 00:21:43.564 { 00:21:43.565 "name": null, 00:21:43.565 "uuid": "cb9c4488-f22d-48d5-bf0a-d2954741f3ce", 00:21:43.565 "is_configured": false, 00:21:43.565 "data_offset": 0, 00:21:43.565 "data_size": 63488 00:21:43.565 }, 00:21:43.565 { 00:21:43.565 "name": null, 00:21:43.565 "uuid": "08c7e11d-c619-45b3-a405-293897059e1b", 00:21:43.565 "is_configured": false, 00:21:43.565 "data_offset": 0, 00:21:43.565 "data_size": 63488 00:21:43.565 }, 00:21:43.565 { 00:21:43.565 "name": "BaseBdev4", 00:21:43.565 "uuid": "75beffb7-92ad-47d4-9720-788c4e376238", 00:21:43.565 "is_configured": true, 00:21:43.565 "data_offset": 2048, 00:21:43.565 "data_size": 63488 00:21:43.565 } 00:21:43.565 ] 00:21:43.565 }' 00:21:43.565 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.565 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.132 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:44.132 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.132 04:43:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.132 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.132 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.132 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:44.132 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.133 [2024-11-27 04:43:31.695396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.133 04:43:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.133 "name": "Existed_Raid", 00:21:44.133 "uuid": "526bc069-1157-4d8e-83cf-8449a87109ed", 00:21:44.133 "strip_size_kb": 64, 00:21:44.133 "state": "configuring", 00:21:44.133 "raid_level": "raid5f", 00:21:44.133 "superblock": true, 00:21:44.133 "num_base_bdevs": 4, 00:21:44.133 "num_base_bdevs_discovered": 3, 00:21:44.133 "num_base_bdevs_operational": 4, 00:21:44.133 "base_bdevs_list": [ 00:21:44.133 { 00:21:44.133 "name": "BaseBdev1", 00:21:44.133 "uuid": "22446a14-dcdc-4da3-9fb7-c0c9bf854419", 00:21:44.133 "is_configured": true, 00:21:44.133 "data_offset": 2048, 00:21:44.133 "data_size": 63488 00:21:44.133 }, 00:21:44.133 { 00:21:44.133 "name": null, 00:21:44.133 "uuid": "cb9c4488-f22d-48d5-bf0a-d2954741f3ce", 00:21:44.133 "is_configured": false, 00:21:44.133 "data_offset": 0, 00:21:44.133 "data_size": 63488 00:21:44.133 }, 00:21:44.133 { 00:21:44.133 "name": "BaseBdev3", 00:21:44.133 "uuid": "08c7e11d-c619-45b3-a405-293897059e1b", 00:21:44.133 "is_configured": true, 00:21:44.133 "data_offset": 2048, 00:21:44.133 "data_size": 63488 00:21:44.133 }, 00:21:44.133 { 
00:21:44.133 "name": "BaseBdev4", 00:21:44.133 "uuid": "75beffb7-92ad-47d4-9720-788c4e376238", 00:21:44.133 "is_configured": true, 00:21:44.133 "data_offset": 2048, 00:21:44.133 "data_size": 63488 00:21:44.133 } 00:21:44.133 ] 00:21:44.133 }' 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.133 04:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.698 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.698 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.698 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:44.698 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.698 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.698 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:44.698 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:44.698 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.698 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.698 [2024-11-27 04:43:32.255558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.956 "name": "Existed_Raid", 00:21:44.956 "uuid": "526bc069-1157-4d8e-83cf-8449a87109ed", 00:21:44.956 "strip_size_kb": 64, 00:21:44.956 "state": "configuring", 00:21:44.956 "raid_level": "raid5f", 00:21:44.956 "superblock": true, 00:21:44.956 "num_base_bdevs": 4, 00:21:44.956 "num_base_bdevs_discovered": 2, 00:21:44.956 
"num_base_bdevs_operational": 4, 00:21:44.956 "base_bdevs_list": [ 00:21:44.956 { 00:21:44.956 "name": null, 00:21:44.956 "uuid": "22446a14-dcdc-4da3-9fb7-c0c9bf854419", 00:21:44.956 "is_configured": false, 00:21:44.956 "data_offset": 0, 00:21:44.956 "data_size": 63488 00:21:44.956 }, 00:21:44.956 { 00:21:44.956 "name": null, 00:21:44.956 "uuid": "cb9c4488-f22d-48d5-bf0a-d2954741f3ce", 00:21:44.956 "is_configured": false, 00:21:44.956 "data_offset": 0, 00:21:44.956 "data_size": 63488 00:21:44.956 }, 00:21:44.956 { 00:21:44.956 "name": "BaseBdev3", 00:21:44.956 "uuid": "08c7e11d-c619-45b3-a405-293897059e1b", 00:21:44.956 "is_configured": true, 00:21:44.956 "data_offset": 2048, 00:21:44.956 "data_size": 63488 00:21:44.956 }, 00:21:44.956 { 00:21:44.956 "name": "BaseBdev4", 00:21:44.956 "uuid": "75beffb7-92ad-47d4-9720-788c4e376238", 00:21:44.956 "is_configured": true, 00:21:44.956 "data_offset": 2048, 00:21:44.956 "data_size": 63488 00:21:44.956 } 00:21:44.956 ] 00:21:44.956 }' 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.956 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.522 [2024-11-27 04:43:32.905258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.522 "name": "Existed_Raid", 00:21:45.522 "uuid": "526bc069-1157-4d8e-83cf-8449a87109ed", 00:21:45.522 "strip_size_kb": 64, 00:21:45.522 "state": "configuring", 00:21:45.522 "raid_level": "raid5f", 00:21:45.522 "superblock": true, 00:21:45.522 "num_base_bdevs": 4, 00:21:45.522 "num_base_bdevs_discovered": 3, 00:21:45.522 "num_base_bdevs_operational": 4, 00:21:45.522 "base_bdevs_list": [ 00:21:45.522 { 00:21:45.522 "name": null, 00:21:45.522 "uuid": "22446a14-dcdc-4da3-9fb7-c0c9bf854419", 00:21:45.522 "is_configured": false, 00:21:45.522 "data_offset": 0, 00:21:45.522 "data_size": 63488 00:21:45.522 }, 00:21:45.522 { 00:21:45.522 "name": "BaseBdev2", 00:21:45.522 "uuid": "cb9c4488-f22d-48d5-bf0a-d2954741f3ce", 00:21:45.522 "is_configured": true, 00:21:45.522 "data_offset": 2048, 00:21:45.522 "data_size": 63488 00:21:45.522 }, 00:21:45.522 { 00:21:45.522 "name": "BaseBdev3", 00:21:45.522 "uuid": "08c7e11d-c619-45b3-a405-293897059e1b", 00:21:45.522 "is_configured": true, 00:21:45.522 "data_offset": 2048, 00:21:45.522 "data_size": 63488 00:21:45.522 }, 00:21:45.522 { 00:21:45.522 "name": "BaseBdev4", 00:21:45.522 "uuid": "75beffb7-92ad-47d4-9720-788c4e376238", 00:21:45.522 "is_configured": true, 00:21:45.522 "data_offset": 2048, 00:21:45.522 "data_size": 63488 00:21:45.522 } 00:21:45.522 ] 00:21:45.522 }' 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.522 04:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 22446a14-dcdc-4da3-9fb7-c0c9bf854419 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.089 [2024-11-27 04:43:33.587383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:46.089 NewBaseBdev 00:21:46.089 [2024-11-27 04:43:33.587877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:21:46.089 [2024-11-27 04:43:33.587902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:46.089 [2024-11-27 04:43:33.588230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:46.089 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.090 [2024-11-27 04:43:33.594823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:46.090 [2024-11-27 04:43:33.594857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:46.090 [2024-11-27 04:43:33.595152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.090 [ 00:21:46.090 { 00:21:46.090 "name": "NewBaseBdev", 00:21:46.090 "aliases": [ 00:21:46.090 "22446a14-dcdc-4da3-9fb7-c0c9bf854419" 00:21:46.090 ], 00:21:46.090 "product_name": "Malloc disk", 00:21:46.090 "block_size": 512, 00:21:46.090 "num_blocks": 65536, 00:21:46.090 "uuid": "22446a14-dcdc-4da3-9fb7-c0c9bf854419", 00:21:46.090 "assigned_rate_limits": { 00:21:46.090 "rw_ios_per_sec": 0, 00:21:46.090 "rw_mbytes_per_sec": 0, 00:21:46.090 "r_mbytes_per_sec": 0, 00:21:46.090 "w_mbytes_per_sec": 0 00:21:46.090 }, 00:21:46.090 "claimed": true, 00:21:46.090 "claim_type": "exclusive_write", 00:21:46.090 "zoned": false, 00:21:46.090 "supported_io_types": { 00:21:46.090 "read": true, 00:21:46.090 "write": true, 00:21:46.090 "unmap": true, 00:21:46.090 "flush": true, 00:21:46.090 "reset": true, 00:21:46.090 "nvme_admin": false, 00:21:46.090 "nvme_io": false, 00:21:46.090 "nvme_io_md": false, 00:21:46.090 "write_zeroes": true, 00:21:46.090 "zcopy": true, 00:21:46.090 "get_zone_info": false, 00:21:46.090 "zone_management": false, 00:21:46.090 "zone_append": false, 00:21:46.090 "compare": false, 00:21:46.090 "compare_and_write": false, 00:21:46.090 "abort": true, 00:21:46.090 "seek_hole": false, 00:21:46.090 "seek_data": false, 00:21:46.090 "copy": true, 00:21:46.090 "nvme_iov_md": false 00:21:46.090 }, 00:21:46.090 "memory_domains": [ 00:21:46.090 { 00:21:46.090 "dma_device_id": "system", 00:21:46.090 "dma_device_type": 1 00:21:46.090 }, 00:21:46.090 { 00:21:46.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.090 "dma_device_type": 2 00:21:46.090 } 00:21:46.090 ], 00:21:46.090 "driver_specific": {} 00:21:46.090 } 00:21:46.090 ] 00:21:46.090 04:43:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.090 "name": "Existed_Raid", 00:21:46.090 "uuid": "526bc069-1157-4d8e-83cf-8449a87109ed", 00:21:46.090 "strip_size_kb": 64, 00:21:46.090 "state": "online", 00:21:46.090 "raid_level": "raid5f", 00:21:46.090 "superblock": true, 00:21:46.090 "num_base_bdevs": 4, 00:21:46.090 "num_base_bdevs_discovered": 4, 00:21:46.090 "num_base_bdevs_operational": 4, 00:21:46.090 "base_bdevs_list": [ 00:21:46.090 { 00:21:46.090 "name": "NewBaseBdev", 00:21:46.090 "uuid": "22446a14-dcdc-4da3-9fb7-c0c9bf854419", 00:21:46.090 "is_configured": true, 00:21:46.090 "data_offset": 2048, 00:21:46.090 "data_size": 63488 00:21:46.090 }, 00:21:46.090 { 00:21:46.090 "name": "BaseBdev2", 00:21:46.090 "uuid": "cb9c4488-f22d-48d5-bf0a-d2954741f3ce", 00:21:46.090 "is_configured": true, 00:21:46.090 "data_offset": 2048, 00:21:46.090 "data_size": 63488 00:21:46.090 }, 00:21:46.090 { 00:21:46.090 "name": "BaseBdev3", 00:21:46.090 "uuid": "08c7e11d-c619-45b3-a405-293897059e1b", 00:21:46.090 "is_configured": true, 00:21:46.090 "data_offset": 2048, 00:21:46.090 "data_size": 63488 00:21:46.090 }, 00:21:46.090 { 00:21:46.090 "name": "BaseBdev4", 00:21:46.090 "uuid": "75beffb7-92ad-47d4-9720-788c4e376238", 00:21:46.090 "is_configured": true, 00:21:46.090 "data_offset": 2048, 00:21:46.090 "data_size": 63488 00:21:46.090 } 00:21:46.090 ] 00:21:46.090 }' 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.090 04:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:46.656 [2024-11-27 04:43:34.146873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.656 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:46.656 "name": "Existed_Raid", 00:21:46.656 "aliases": [ 00:21:46.656 "526bc069-1157-4d8e-83cf-8449a87109ed" 00:21:46.656 ], 00:21:46.656 "product_name": "Raid Volume", 00:21:46.656 "block_size": 512, 00:21:46.656 "num_blocks": 190464, 00:21:46.656 "uuid": "526bc069-1157-4d8e-83cf-8449a87109ed", 00:21:46.656 "assigned_rate_limits": { 00:21:46.656 "rw_ios_per_sec": 0, 00:21:46.656 "rw_mbytes_per_sec": 0, 00:21:46.656 "r_mbytes_per_sec": 0, 00:21:46.656 "w_mbytes_per_sec": 0 00:21:46.656 }, 00:21:46.656 "claimed": false, 00:21:46.656 "zoned": false, 00:21:46.656 "supported_io_types": { 00:21:46.656 "read": true, 00:21:46.656 "write": true, 00:21:46.656 "unmap": false, 00:21:46.656 "flush": false, 00:21:46.656 "reset": true, 00:21:46.656 "nvme_admin": false, 00:21:46.656 "nvme_io": false, 
00:21:46.656 "nvme_io_md": false, 00:21:46.656 "write_zeroes": true, 00:21:46.656 "zcopy": false, 00:21:46.656 "get_zone_info": false, 00:21:46.656 "zone_management": false, 00:21:46.656 "zone_append": false, 00:21:46.656 "compare": false, 00:21:46.656 "compare_and_write": false, 00:21:46.656 "abort": false, 00:21:46.656 "seek_hole": false, 00:21:46.656 "seek_data": false, 00:21:46.656 "copy": false, 00:21:46.656 "nvme_iov_md": false 00:21:46.656 }, 00:21:46.656 "driver_specific": { 00:21:46.656 "raid": { 00:21:46.656 "uuid": "526bc069-1157-4d8e-83cf-8449a87109ed", 00:21:46.656 "strip_size_kb": 64, 00:21:46.656 "state": "online", 00:21:46.656 "raid_level": "raid5f", 00:21:46.656 "superblock": true, 00:21:46.656 "num_base_bdevs": 4, 00:21:46.656 "num_base_bdevs_discovered": 4, 00:21:46.656 "num_base_bdevs_operational": 4, 00:21:46.656 "base_bdevs_list": [ 00:21:46.656 { 00:21:46.656 "name": "NewBaseBdev", 00:21:46.656 "uuid": "22446a14-dcdc-4da3-9fb7-c0c9bf854419", 00:21:46.656 "is_configured": true, 00:21:46.656 "data_offset": 2048, 00:21:46.656 "data_size": 63488 00:21:46.656 }, 00:21:46.656 { 00:21:46.656 "name": "BaseBdev2", 00:21:46.656 "uuid": "cb9c4488-f22d-48d5-bf0a-d2954741f3ce", 00:21:46.656 "is_configured": true, 00:21:46.656 "data_offset": 2048, 00:21:46.656 "data_size": 63488 00:21:46.656 }, 00:21:46.656 { 00:21:46.656 "name": "BaseBdev3", 00:21:46.656 "uuid": "08c7e11d-c619-45b3-a405-293897059e1b", 00:21:46.657 "is_configured": true, 00:21:46.657 "data_offset": 2048, 00:21:46.657 "data_size": 63488 00:21:46.657 }, 00:21:46.657 { 00:21:46.657 "name": "BaseBdev4", 00:21:46.657 "uuid": "75beffb7-92ad-47d4-9720-788c4e376238", 00:21:46.657 "is_configured": true, 00:21:46.657 "data_offset": 2048, 00:21:46.657 "data_size": 63488 00:21:46.657 } 00:21:46.657 ] 00:21:46.657 } 00:21:46.657 } 00:21:46.657 }' 00:21:46.657 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:21:46.657 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:46.657 BaseBdev2 00:21:46.657 BaseBdev3 00:21:46.657 BaseBdev4' 00:21:46.657 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.915 04:43:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.915 04:43:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.915 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.915 [2024-11-27 04:43:34.490606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:46.916 [2024-11-27 04:43:34.490766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.916 [2024-11-27 04:43:34.490969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.916 [2024-11-27 04:43:34.491479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.916 [2024-11-27 04:43:34.491507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83987 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83987 ']' 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83987 00:21:46.916 04:43:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83987 00:21:46.916 killing process with pid 83987 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83987' 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83987 00:21:46.916 [2024-11-27 04:43:34.523106] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:46.916 04:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83987 00:21:47.482 [2024-11-27 04:43:34.868666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:48.418 ************************************ 00:21:48.418 END TEST raid5f_state_function_test_sb 00:21:48.418 ************************************ 00:21:48.418 04:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:48.418 00:21:48.418 real 0m12.841s 00:21:48.418 user 0m21.277s 00:21:48.418 sys 0m1.797s 00:21:48.418 04:43:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.418 04:43:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.418 04:43:35 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:21:48.418 04:43:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:48.418 
04:43:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.418 04:43:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:48.418 ************************************ 00:21:48.418 START TEST raid5f_superblock_test 00:21:48.418 ************************************ 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:48.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84669 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84669 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84669 ']' 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.418 04:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.675 [2024-11-27 04:43:36.084729] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:21:48.675 [2024-11-27 04:43:36.084930] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84669 ] 00:21:48.675 [2024-11-27 04:43:36.267576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.932 [2024-11-27 04:43:36.417504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.190 [2024-11-27 04:43:36.631825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.190 [2024-11-27 04:43:36.631862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.757 malloc1 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.757 [2024-11-27 04:43:37.189891] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:49.757 [2024-11-27 04:43:37.190099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.757 [2024-11-27 04:43:37.190188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:49.757 [2024-11-27 04:43:37.190328] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.757 [2024-11-27 04:43:37.193322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.757 [2024-11-27 04:43:37.193495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:49.757 pt1 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.757 malloc2 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.757 [2024-11-27 04:43:37.237961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:49.757 [2024-11-27 04:43:37.238033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.757 [2024-11-27 04:43:37.238072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:49.757 [2024-11-27 04:43:37.238087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.757 [2024-11-27 04:43:37.240838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.757 [2024-11-27 04:43:37.240902] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:49.757 pt2 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.757 malloc3 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.757 [2024-11-27 04:43:37.297112] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:49.757 [2024-11-27 04:43:37.297303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.757 [2024-11-27 04:43:37.297380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:49.757 [2024-11-27 04:43:37.297571] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.757 [2024-11-27 04:43:37.300325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.757 [2024-11-27 04:43:37.300481] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:49.757 pt3 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.757 04:43:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.757 malloc4 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.757 [2024-11-27 04:43:37.348670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:49.757 [2024-11-27 04:43:37.348877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.757 [2024-11-27 04:43:37.348952] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:49.757 [2024-11-27 04:43:37.349059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.757 [2024-11-27 04:43:37.351828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.757 [2024-11-27 04:43:37.351980] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:49.757 pt4 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.757 04:43:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:49.757 [2024-11-27 04:43:37.356753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:49.757 [2024-11-27 04:43:37.359134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:49.757 [2024-11-27 04:43:37.359385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:49.757 [2024-11-27 04:43:37.359468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:49.757 [2024-11-27 04:43:37.359728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:49.758 [2024-11-27 04:43:37.359752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:49.758 [2024-11-27 04:43:37.360073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:49.758 [2024-11-27 04:43:37.366888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:49.758 [2024-11-27 04:43:37.367024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:49.758 [2024-11-27 04:43:37.367431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:49.758 
04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.758 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.016 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.016 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.016 "name": "raid_bdev1", 00:21:50.016 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:50.016 "strip_size_kb": 64, 00:21:50.016 "state": "online", 00:21:50.016 "raid_level": "raid5f", 00:21:50.016 "superblock": true, 00:21:50.016 "num_base_bdevs": 4, 00:21:50.016 "num_base_bdevs_discovered": 4, 00:21:50.016 "num_base_bdevs_operational": 4, 00:21:50.016 "base_bdevs_list": [ 00:21:50.016 { 00:21:50.016 "name": "pt1", 00:21:50.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:50.016 "is_configured": true, 00:21:50.016 "data_offset": 2048, 00:21:50.016 "data_size": 63488 00:21:50.016 }, 00:21:50.016 { 00:21:50.016 "name": "pt2", 00:21:50.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:50.016 "is_configured": true, 00:21:50.016 "data_offset": 2048, 00:21:50.016 
"data_size": 63488 00:21:50.016 }, 00:21:50.016 { 00:21:50.016 "name": "pt3", 00:21:50.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:50.016 "is_configured": true, 00:21:50.016 "data_offset": 2048, 00:21:50.016 "data_size": 63488 00:21:50.016 }, 00:21:50.016 { 00:21:50.016 "name": "pt4", 00:21:50.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:50.016 "is_configured": true, 00:21:50.016 "data_offset": 2048, 00:21:50.016 "data_size": 63488 00:21:50.016 } 00:21:50.016 ] 00:21:50.016 }' 00:21:50.016 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.016 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.275 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:50.275 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:50.275 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:50.275 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:50.275 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:50.275 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:50.275 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:50.275 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.275 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:50.275 04:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.275 [2024-11-27 04:43:37.895171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.534 04:43:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.534 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:50.534 "name": "raid_bdev1", 00:21:50.534 "aliases": [ 00:21:50.534 "2017ace7-c19e-4983-b3bf-9bd762fa590d" 00:21:50.534 ], 00:21:50.534 "product_name": "Raid Volume", 00:21:50.534 "block_size": 512, 00:21:50.534 "num_blocks": 190464, 00:21:50.534 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:50.534 "assigned_rate_limits": { 00:21:50.534 "rw_ios_per_sec": 0, 00:21:50.534 "rw_mbytes_per_sec": 0, 00:21:50.534 "r_mbytes_per_sec": 0, 00:21:50.534 "w_mbytes_per_sec": 0 00:21:50.534 }, 00:21:50.534 "claimed": false, 00:21:50.534 "zoned": false, 00:21:50.534 "supported_io_types": { 00:21:50.534 "read": true, 00:21:50.534 "write": true, 00:21:50.534 "unmap": false, 00:21:50.534 "flush": false, 00:21:50.534 "reset": true, 00:21:50.534 "nvme_admin": false, 00:21:50.534 "nvme_io": false, 00:21:50.534 "nvme_io_md": false, 00:21:50.534 "write_zeroes": true, 00:21:50.534 "zcopy": false, 00:21:50.534 "get_zone_info": false, 00:21:50.534 "zone_management": false, 00:21:50.534 "zone_append": false, 00:21:50.534 "compare": false, 00:21:50.534 "compare_and_write": false, 00:21:50.534 "abort": false, 00:21:50.534 "seek_hole": false, 00:21:50.534 "seek_data": false, 00:21:50.534 "copy": false, 00:21:50.534 "nvme_iov_md": false 00:21:50.534 }, 00:21:50.534 "driver_specific": { 00:21:50.534 "raid": { 00:21:50.534 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:50.534 "strip_size_kb": 64, 00:21:50.534 "state": "online", 00:21:50.534 "raid_level": "raid5f", 00:21:50.534 "superblock": true, 00:21:50.534 "num_base_bdevs": 4, 00:21:50.534 "num_base_bdevs_discovered": 4, 00:21:50.534 "num_base_bdevs_operational": 4, 00:21:50.534 "base_bdevs_list": [ 00:21:50.534 { 00:21:50.534 "name": "pt1", 00:21:50.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:50.534 "is_configured": true, 00:21:50.534 "data_offset": 2048, 
00:21:50.534 "data_size": 63488 00:21:50.534 }, 00:21:50.534 { 00:21:50.534 "name": "pt2", 00:21:50.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:50.534 "is_configured": true, 00:21:50.534 "data_offset": 2048, 00:21:50.534 "data_size": 63488 00:21:50.534 }, 00:21:50.534 { 00:21:50.534 "name": "pt3", 00:21:50.534 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:50.534 "is_configured": true, 00:21:50.534 "data_offset": 2048, 00:21:50.534 "data_size": 63488 00:21:50.534 }, 00:21:50.534 { 00:21:50.534 "name": "pt4", 00:21:50.534 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:50.534 "is_configured": true, 00:21:50.534 "data_offset": 2048, 00:21:50.534 "data_size": 63488 00:21:50.534 } 00:21:50.534 ] 00:21:50.534 } 00:21:50.534 } 00:21:50.534 }' 00:21:50.534 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:50.534 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:50.534 pt2 00:21:50.534 pt3 00:21:50.534 pt4' 00:21:50.534 04:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.534 04:43:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.534 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:50.792 [2024-11-27 04:43:38.259190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2017ace7-c19e-4983-b3bf-9bd762fa590d 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
2017ace7-c19e-4983-b3bf-9bd762fa590d ']' 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 [2024-11-27 04:43:38.306981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:50.792 [2024-11-27 04:43:38.307128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:50.792 [2024-11-27 04:43:38.307380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:50.792 [2024-11-27 04:43:38.307502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:50.792 [2024-11-27 04:43:38.307527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:50.792 
04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 04:43:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.792 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.051 [2024-11-27 04:43:38.455042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:51.051 [2024-11-27 04:43:38.457612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:51.051 [2024-11-27 04:43:38.457681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:51.051 [2024-11-27 04:43:38.457736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:51.051 [2024-11-27 04:43:38.457829] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:51.051 [2024-11-27 04:43:38.457902] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:51.051 [2024-11-27 04:43:38.457948] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:51.051 [2024-11-27 04:43:38.457979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:51.051 [2024-11-27 04:43:38.458001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:51.051 [2024-11-27 04:43:38.458017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:51.051 request: 00:21:51.051 { 00:21:51.051 "name": "raid_bdev1", 00:21:51.051 "raid_level": "raid5f", 00:21:51.051 "base_bdevs": [ 00:21:51.051 "malloc1", 00:21:51.051 "malloc2", 00:21:51.051 "malloc3", 00:21:51.051 "malloc4" 00:21:51.051 ], 00:21:51.051 "strip_size_kb": 64, 00:21:51.051 "superblock": false, 00:21:51.051 "method": "bdev_raid_create", 00:21:51.051 "req_id": 1 00:21:51.051 } 00:21:51.051 Got JSON-RPC error response 
00:21:51.051 response: 00:21:51.051 { 00:21:51.051 "code": -17, 00:21:51.051 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:51.051 } 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.051 [2024-11-27 04:43:38.515020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:51.051 [2024-11-27 04:43:38.515212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:21:51.051 [2024-11-27 04:43:38.515286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:51.051 [2024-11-27 04:43:38.515491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.051 [2024-11-27 04:43:38.518310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.051 [2024-11-27 04:43:38.518363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:51.051 [2024-11-27 04:43:38.518460] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:51.051 [2024-11-27 04:43:38.518555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:51.051 pt1 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.051 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.052 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.052 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:21:51.052 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.052 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.052 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.052 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.052 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.052 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.052 "name": "raid_bdev1", 00:21:51.052 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:51.052 "strip_size_kb": 64, 00:21:51.052 "state": "configuring", 00:21:51.052 "raid_level": "raid5f", 00:21:51.052 "superblock": true, 00:21:51.052 "num_base_bdevs": 4, 00:21:51.052 "num_base_bdevs_discovered": 1, 00:21:51.052 "num_base_bdevs_operational": 4, 00:21:51.052 "base_bdevs_list": [ 00:21:51.052 { 00:21:51.052 "name": "pt1", 00:21:51.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:51.052 "is_configured": true, 00:21:51.052 "data_offset": 2048, 00:21:51.052 "data_size": 63488 00:21:51.052 }, 00:21:51.052 { 00:21:51.052 "name": null, 00:21:51.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:51.052 "is_configured": false, 00:21:51.052 "data_offset": 2048, 00:21:51.052 "data_size": 63488 00:21:51.052 }, 00:21:51.052 { 00:21:51.052 "name": null, 00:21:51.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:51.052 "is_configured": false, 00:21:51.052 "data_offset": 2048, 00:21:51.052 "data_size": 63488 00:21:51.052 }, 00:21:51.052 { 00:21:51.052 "name": null, 00:21:51.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:51.052 "is_configured": false, 00:21:51.052 "data_offset": 2048, 00:21:51.052 "data_size": 63488 00:21:51.052 } 00:21:51.052 ] 00:21:51.052 }' 
00:21:51.052 04:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.052 04:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.665 [2024-11-27 04:43:39.047236] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:51.665 [2024-11-27 04:43:39.047928] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.665 [2024-11-27 04:43:39.047970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:51.665 [2024-11-27 04:43:39.047989] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.665 [2024-11-27 04:43:39.048551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.665 [2024-11-27 04:43:39.048588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:51.665 [2024-11-27 04:43:39.048694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:51.665 [2024-11-27 04:43:39.048733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:51.665 pt2 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.665 [2024-11-27 04:43:39.059257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.665 "name": "raid_bdev1", 00:21:51.665 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:51.665 "strip_size_kb": 64, 00:21:51.665 "state": "configuring", 00:21:51.665 "raid_level": "raid5f", 00:21:51.665 "superblock": true, 00:21:51.665 "num_base_bdevs": 4, 00:21:51.665 "num_base_bdevs_discovered": 1, 00:21:51.665 "num_base_bdevs_operational": 4, 00:21:51.665 "base_bdevs_list": [ 00:21:51.665 { 00:21:51.665 "name": "pt1", 00:21:51.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:51.665 "is_configured": true, 00:21:51.665 "data_offset": 2048, 00:21:51.665 "data_size": 63488 00:21:51.665 }, 00:21:51.665 { 00:21:51.665 "name": null, 00:21:51.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:51.665 "is_configured": false, 00:21:51.665 "data_offset": 0, 00:21:51.665 "data_size": 63488 00:21:51.665 }, 00:21:51.665 { 00:21:51.665 "name": null, 00:21:51.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:51.665 "is_configured": false, 00:21:51.665 "data_offset": 2048, 00:21:51.665 "data_size": 63488 00:21:51.665 }, 00:21:51.665 { 00:21:51.665 "name": null, 00:21:51.665 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:51.665 "is_configured": false, 00:21:51.665 "data_offset": 2048, 00:21:51.665 "data_size": 63488 00:21:51.665 } 00:21:51.665 ] 00:21:51.665 }' 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.665 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.237 [2024-11-27 04:43:39.599336] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:52.237 [2024-11-27 04:43:39.599422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.237 [2024-11-27 04:43:39.599453] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:52.237 [2024-11-27 04:43:39.599469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.237 [2024-11-27 04:43:39.600060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.237 [2024-11-27 04:43:39.600093] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:52.237 [2024-11-27 04:43:39.600201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:52.237 [2024-11-27 04:43:39.600234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:52.237 pt2 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.237 [2024-11-27 04:43:39.607298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:21:52.237 [2024-11-27 04:43:39.607502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.237 [2024-11-27 04:43:39.607579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:52.237 [2024-11-27 04:43:39.607707] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.237 [2024-11-27 04:43:39.608210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.237 [2024-11-27 04:43:39.608367] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:52.237 [2024-11-27 04:43:39.608567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:52.237 [2024-11-27 04:43:39.608612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:52.237 pt3 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.237 [2024-11-27 04:43:39.615269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:52.237 [2024-11-27 04:43:39.615424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.237 [2024-11-27 04:43:39.615492] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:52.237 [2024-11-27 04:43:39.615613] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.237 [2024-11-27 04:43:39.616137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.237 [2024-11-27 04:43:39.616293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:52.237 [2024-11-27 04:43:39.616476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:52.237 [2024-11-27 04:43:39.616617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:52.237 [2024-11-27 04:43:39.616836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:52.237 [2024-11-27 04:43:39.616854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:52.237 [2024-11-27 04:43:39.617152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:52.237 [2024-11-27 04:43:39.623718] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:52.237 [2024-11-27 04:43:39.623868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:52.237 [2024-11-27 04:43:39.624216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.237 pt4 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:52.237 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.238 "name": "raid_bdev1", 00:21:52.238 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:52.238 "strip_size_kb": 64, 00:21:52.238 "state": "online", 00:21:52.238 "raid_level": "raid5f", 00:21:52.238 "superblock": true, 00:21:52.238 "num_base_bdevs": 4, 00:21:52.238 "num_base_bdevs_discovered": 4, 00:21:52.238 "num_base_bdevs_operational": 4, 00:21:52.238 "base_bdevs_list": [ 00:21:52.238 { 00:21:52.238 "name": "pt1", 00:21:52.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:52.238 "is_configured": true, 00:21:52.238 
"data_offset": 2048, 00:21:52.238 "data_size": 63488 00:21:52.238 }, 00:21:52.238 { 00:21:52.238 "name": "pt2", 00:21:52.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:52.238 "is_configured": true, 00:21:52.238 "data_offset": 2048, 00:21:52.238 "data_size": 63488 00:21:52.238 }, 00:21:52.238 { 00:21:52.238 "name": "pt3", 00:21:52.238 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:52.238 "is_configured": true, 00:21:52.238 "data_offset": 2048, 00:21:52.238 "data_size": 63488 00:21:52.238 }, 00:21:52.238 { 00:21:52.238 "name": "pt4", 00:21:52.238 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:52.238 "is_configured": true, 00:21:52.238 "data_offset": 2048, 00:21:52.238 "data_size": 63488 00:21:52.238 } 00:21:52.238 ] 00:21:52.238 }' 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.238 04:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:52.805 04:43:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.805 [2024-11-27 04:43:40.172072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.805 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:52.805 "name": "raid_bdev1", 00:21:52.805 "aliases": [ 00:21:52.805 "2017ace7-c19e-4983-b3bf-9bd762fa590d" 00:21:52.805 ], 00:21:52.805 "product_name": "Raid Volume", 00:21:52.805 "block_size": 512, 00:21:52.805 "num_blocks": 190464, 00:21:52.805 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:52.805 "assigned_rate_limits": { 00:21:52.805 "rw_ios_per_sec": 0, 00:21:52.805 "rw_mbytes_per_sec": 0, 00:21:52.805 "r_mbytes_per_sec": 0, 00:21:52.805 "w_mbytes_per_sec": 0 00:21:52.805 }, 00:21:52.805 "claimed": false, 00:21:52.805 "zoned": false, 00:21:52.805 "supported_io_types": { 00:21:52.805 "read": true, 00:21:52.805 "write": true, 00:21:52.805 "unmap": false, 00:21:52.805 "flush": false, 00:21:52.805 "reset": true, 00:21:52.805 "nvme_admin": false, 00:21:52.805 "nvme_io": false, 00:21:52.805 "nvme_io_md": false, 00:21:52.805 "write_zeroes": true, 00:21:52.805 "zcopy": false, 00:21:52.805 "get_zone_info": false, 00:21:52.805 "zone_management": false, 00:21:52.805 "zone_append": false, 00:21:52.805 "compare": false, 00:21:52.805 "compare_and_write": false, 00:21:52.805 "abort": false, 00:21:52.805 "seek_hole": false, 00:21:52.805 "seek_data": false, 00:21:52.805 "copy": false, 00:21:52.805 "nvme_iov_md": false 00:21:52.805 }, 00:21:52.805 "driver_specific": { 00:21:52.805 "raid": { 00:21:52.805 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:52.805 "strip_size_kb": 64, 00:21:52.805 "state": "online", 00:21:52.805 "raid_level": "raid5f", 00:21:52.805 "superblock": true, 00:21:52.805 "num_base_bdevs": 4, 00:21:52.805 "num_base_bdevs_discovered": 4, 
00:21:52.805 "num_base_bdevs_operational": 4, 00:21:52.805 "base_bdevs_list": [ 00:21:52.805 { 00:21:52.805 "name": "pt1", 00:21:52.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:52.805 "is_configured": true, 00:21:52.805 "data_offset": 2048, 00:21:52.805 "data_size": 63488 00:21:52.805 }, 00:21:52.805 { 00:21:52.805 "name": "pt2", 00:21:52.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:52.805 "is_configured": true, 00:21:52.805 "data_offset": 2048, 00:21:52.805 "data_size": 63488 00:21:52.805 }, 00:21:52.806 { 00:21:52.806 "name": "pt3", 00:21:52.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:52.806 "is_configured": true, 00:21:52.806 "data_offset": 2048, 00:21:52.806 "data_size": 63488 00:21:52.806 }, 00:21:52.806 { 00:21:52.806 "name": "pt4", 00:21:52.806 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:52.806 "is_configured": true, 00:21:52.806 "data_offset": 2048, 00:21:52.806 "data_size": 63488 00:21:52.806 } 00:21:52.806 ] 00:21:52.806 } 00:21:52.806 } 00:21:52.806 }' 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:52.806 pt2 00:21:52.806 pt3 00:21:52.806 pt4' 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.806 04:43:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.806 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:53.065 [2024-11-27 04:43:40.568105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.065 
04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2017ace7-c19e-4983-b3bf-9bd762fa590d '!=' 2017ace7-c19e-4983-b3bf-9bd762fa590d ']' 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.065 [2024-11-27 04:43:40.623972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.065 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.065 "name": "raid_bdev1", 00:21:53.065 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:53.065 "strip_size_kb": 64, 00:21:53.065 "state": "online", 00:21:53.065 "raid_level": "raid5f", 00:21:53.065 "superblock": true, 00:21:53.065 "num_base_bdevs": 4, 00:21:53.065 "num_base_bdevs_discovered": 3, 00:21:53.065 "num_base_bdevs_operational": 3, 00:21:53.065 "base_bdevs_list": [ 00:21:53.065 { 00:21:53.065 "name": null, 00:21:53.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.065 "is_configured": false, 00:21:53.065 "data_offset": 0, 00:21:53.065 "data_size": 63488 00:21:53.065 }, 00:21:53.065 { 00:21:53.065 "name": "pt2", 00:21:53.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:53.065 "is_configured": true, 00:21:53.065 "data_offset": 2048, 00:21:53.065 "data_size": 63488 00:21:53.065 }, 00:21:53.065 { 00:21:53.065 "name": "pt3", 00:21:53.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:53.065 "is_configured": true, 00:21:53.065 "data_offset": 2048, 00:21:53.065 "data_size": 63488 00:21:53.065 }, 00:21:53.066 { 00:21:53.066 "name": "pt4", 00:21:53.066 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:53.066 "is_configured": true, 00:21:53.066 
"data_offset": 2048, 00:21:53.066 "data_size": 63488 00:21:53.066 } 00:21:53.066 ] 00:21:53.066 }' 00:21:53.066 04:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.066 04:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.633 [2024-11-27 04:43:41.144099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:53.633 [2024-11-27 04:43:41.144138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:53.633 [2024-11-27 04:43:41.144241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:53.633 [2024-11-27 04:43:41.144346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:53.633 [2024-11-27 04:43:41.144363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.633 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.634 [2024-11-27 04:43:41.232082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:53.634 [2024-11-27 04:43:41.232148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.634 [2024-11-27 04:43:41.232178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:53.634 [2024-11-27 04:43:41.232192] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.634 [2024-11-27 04:43:41.235042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.634 [2024-11-27 04:43:41.235087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:53.634 [2024-11-27 04:43:41.235190] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:53.634 [2024-11-27 04:43:41.235249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:53.634 pt2 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.634 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.892 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.892 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.892 "name": "raid_bdev1", 00:21:53.892 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:53.892 "strip_size_kb": 64, 00:21:53.892 "state": "configuring", 00:21:53.892 "raid_level": "raid5f", 00:21:53.892 "superblock": true, 00:21:53.892 
"num_base_bdevs": 4, 00:21:53.893 "num_base_bdevs_discovered": 1, 00:21:53.893 "num_base_bdevs_operational": 3, 00:21:53.893 "base_bdevs_list": [ 00:21:53.893 { 00:21:53.893 "name": null, 00:21:53.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.893 "is_configured": false, 00:21:53.893 "data_offset": 2048, 00:21:53.893 "data_size": 63488 00:21:53.893 }, 00:21:53.893 { 00:21:53.893 "name": "pt2", 00:21:53.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:53.893 "is_configured": true, 00:21:53.893 "data_offset": 2048, 00:21:53.893 "data_size": 63488 00:21:53.893 }, 00:21:53.893 { 00:21:53.893 "name": null, 00:21:53.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:53.893 "is_configured": false, 00:21:53.893 "data_offset": 2048, 00:21:53.893 "data_size": 63488 00:21:53.893 }, 00:21:53.893 { 00:21:53.893 "name": null, 00:21:53.893 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:53.893 "is_configured": false, 00:21:53.893 "data_offset": 2048, 00:21:53.893 "data_size": 63488 00:21:53.893 } 00:21:53.893 ] 00:21:53.893 }' 00:21:53.893 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.893 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.461 [2024-11-27 04:43:41.780269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:54.461 [2024-11-27 
04:43:41.780503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.461 [2024-11-27 04:43:41.780584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:54.461 [2024-11-27 04:43:41.780846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.461 [2024-11-27 04:43:41.781413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.461 [2024-11-27 04:43:41.781450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:54.461 [2024-11-27 04:43:41.781560] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:54.461 [2024-11-27 04:43:41.781593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:54.461 pt3 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.461 "name": "raid_bdev1", 00:21:54.461 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:54.461 "strip_size_kb": 64, 00:21:54.461 "state": "configuring", 00:21:54.461 "raid_level": "raid5f", 00:21:54.461 "superblock": true, 00:21:54.461 "num_base_bdevs": 4, 00:21:54.461 "num_base_bdevs_discovered": 2, 00:21:54.461 "num_base_bdevs_operational": 3, 00:21:54.461 "base_bdevs_list": [ 00:21:54.461 { 00:21:54.461 "name": null, 00:21:54.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.461 "is_configured": false, 00:21:54.461 "data_offset": 2048, 00:21:54.461 "data_size": 63488 00:21:54.461 }, 00:21:54.461 { 00:21:54.461 "name": "pt2", 00:21:54.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:54.461 "is_configured": true, 00:21:54.461 "data_offset": 2048, 00:21:54.461 "data_size": 63488 00:21:54.461 }, 00:21:54.461 { 00:21:54.461 "name": "pt3", 00:21:54.461 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:54.461 "is_configured": true, 00:21:54.461 "data_offset": 2048, 00:21:54.461 "data_size": 63488 00:21:54.461 }, 00:21:54.461 { 00:21:54.461 "name": null, 00:21:54.461 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:54.461 "is_configured": false, 00:21:54.461 "data_offset": 2048, 
00:21:54.461 "data_size": 63488 00:21:54.461 } 00:21:54.461 ] 00:21:54.461 }' 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.461 04:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.721 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:54.721 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:54.721 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:21:54.721 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:54.721 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.721 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.721 [2024-11-27 04:43:42.308425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:54.721 [2024-11-27 04:43:42.308635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.721 [2024-11-27 04:43:42.308714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:54.721 [2024-11-27 04:43:42.308984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.721 [2024-11-27 04:43:42.309563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.721 [2024-11-27 04:43:42.309600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:54.721 [2024-11-27 04:43:42.309707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:54.721 [2024-11-27 04:43:42.309747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:54.721 [2024-11-27 04:43:42.309934] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:54.721 [2024-11-27 04:43:42.309957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:54.721 [2024-11-27 04:43:42.310282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:54.721 [2024-11-27 04:43:42.316756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:54.721 [2024-11-27 04:43:42.316920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:54.721 [2024-11-27 04:43:42.317408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.721 pt4 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.722 
04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.722 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.983 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.983 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.983 "name": "raid_bdev1", 00:21:54.983 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:54.983 "strip_size_kb": 64, 00:21:54.983 "state": "online", 00:21:54.983 "raid_level": "raid5f", 00:21:54.983 "superblock": true, 00:21:54.983 "num_base_bdevs": 4, 00:21:54.983 "num_base_bdevs_discovered": 3, 00:21:54.983 "num_base_bdevs_operational": 3, 00:21:54.983 "base_bdevs_list": [ 00:21:54.983 { 00:21:54.983 "name": null, 00:21:54.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.983 "is_configured": false, 00:21:54.983 "data_offset": 2048, 00:21:54.983 "data_size": 63488 00:21:54.983 }, 00:21:54.983 { 00:21:54.983 "name": "pt2", 00:21:54.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:54.983 "is_configured": true, 00:21:54.983 "data_offset": 2048, 00:21:54.983 "data_size": 63488 00:21:54.983 }, 00:21:54.983 { 00:21:54.983 "name": "pt3", 00:21:54.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:54.983 "is_configured": true, 00:21:54.983 "data_offset": 2048, 00:21:54.983 "data_size": 63488 00:21:54.983 }, 00:21:54.983 { 00:21:54.983 "name": "pt4", 00:21:54.983 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:54.983 "is_configured": true, 00:21:54.983 "data_offset": 2048, 00:21:54.983 "data_size": 63488 00:21:54.983 } 00:21:54.983 ] 00:21:54.983 }' 00:21:54.983 04:43:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.983 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.243 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:55.243 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.243 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.243 [2024-11-27 04:43:42.830044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.243 [2024-11-27 04:43:42.830082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.243 [2024-11-27 04:43:42.830196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.243 [2024-11-27 04:43:42.830322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.243 [2024-11-27 04:43:42.830349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:55.243 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.243 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.243 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:55.243 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.243 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.243 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.502 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.502 [2024-11-27 04:43:42.902030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:55.502 [2024-11-27 04:43:42.902244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.502 [2024-11-27 04:43:42.902321] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:21:55.502 [2024-11-27 04:43:42.902467] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.502 [2024-11-27 04:43:42.905471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.502 [2024-11-27 04:43:42.905639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:55.502 [2024-11-27 04:43:42.905879] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:55.502 [2024-11-27 04:43:42.906055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:55.502 
[2024-11-27 04:43:42.906345] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:55.502 [2024-11-27 04:43:42.906498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.502 [2024-11-27 04:43:42.906614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:55.503 [2024-11-27 04:43:42.906828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:55.503 [2024-11-27 04:43:42.907146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:55.503 pt1 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.503 "name": "raid_bdev1", 00:21:55.503 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:55.503 "strip_size_kb": 64, 00:21:55.503 "state": "configuring", 00:21:55.503 "raid_level": "raid5f", 00:21:55.503 "superblock": true, 00:21:55.503 "num_base_bdevs": 4, 00:21:55.503 "num_base_bdevs_discovered": 2, 00:21:55.503 "num_base_bdevs_operational": 3, 00:21:55.503 "base_bdevs_list": [ 00:21:55.503 { 00:21:55.503 "name": null, 00:21:55.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.503 "is_configured": false, 00:21:55.503 "data_offset": 2048, 00:21:55.503 "data_size": 63488 00:21:55.503 }, 00:21:55.503 { 00:21:55.503 "name": "pt2", 00:21:55.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.503 "is_configured": true, 00:21:55.503 "data_offset": 2048, 00:21:55.503 "data_size": 63488 00:21:55.503 }, 00:21:55.503 { 00:21:55.503 "name": "pt3", 00:21:55.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:55.503 "is_configured": true, 00:21:55.503 "data_offset": 2048, 00:21:55.503 "data_size": 63488 00:21:55.503 }, 00:21:55.503 { 00:21:55.503 "name": null, 00:21:55.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:55.503 "is_configured": false, 00:21:55.503 "data_offset": 2048, 00:21:55.503 "data_size": 63488 00:21:55.503 } 00:21:55.503 ] 
00:21:55.503 }' 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.503 04:43:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.071 [2024-11-27 04:43:43.494599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:56.071 [2024-11-27 04:43:43.494815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.071 [2024-11-27 04:43:43.494863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:56.071 [2024-11-27 04:43:43.494880] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.071 [2024-11-27 04:43:43.495426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.071 [2024-11-27 04:43:43.495452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:21:56.071 [2024-11-27 04:43:43.495556] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:56.071 [2024-11-27 04:43:43.495595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:56.071 [2024-11-27 04:43:43.495767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:56.071 [2024-11-27 04:43:43.495805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:56.071 [2024-11-27 04:43:43.496110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:56.071 [2024-11-27 04:43:43.502554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:56.071 pt4 00:21:56.071 [2024-11-27 04:43:43.502706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:56.071 [2024-11-27 04:43:43.503046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.071 04:43:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.071 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.072 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.072 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.072 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.072 "name": "raid_bdev1", 00:21:56.072 "uuid": "2017ace7-c19e-4983-b3bf-9bd762fa590d", 00:21:56.072 "strip_size_kb": 64, 00:21:56.072 "state": "online", 00:21:56.072 "raid_level": "raid5f", 00:21:56.072 "superblock": true, 00:21:56.072 "num_base_bdevs": 4, 00:21:56.072 "num_base_bdevs_discovered": 3, 00:21:56.072 "num_base_bdevs_operational": 3, 00:21:56.072 "base_bdevs_list": [ 00:21:56.072 { 00:21:56.072 "name": null, 00:21:56.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.072 "is_configured": false, 00:21:56.072 "data_offset": 2048, 00:21:56.072 "data_size": 63488 00:21:56.072 }, 00:21:56.072 { 00:21:56.072 "name": "pt2", 00:21:56.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.072 "is_configured": true, 00:21:56.072 "data_offset": 2048, 00:21:56.072 "data_size": 63488 00:21:56.072 }, 00:21:56.072 { 00:21:56.072 "name": "pt3", 00:21:56.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:56.072 "is_configured": true, 00:21:56.072 "data_offset": 2048, 00:21:56.072 "data_size": 63488 
00:21:56.072 }, 00:21:56.072 { 00:21:56.072 "name": "pt4", 00:21:56.072 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:56.072 "is_configured": true, 00:21:56.072 "data_offset": 2048, 00:21:56.072 "data_size": 63488 00:21:56.072 } 00:21:56.072 ] 00:21:56.072 }' 00:21:56.072 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.072 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.639 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:56.639 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.639 04:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.639 04:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.639 [2024-11-27 04:43:44.046681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2017ace7-c19e-4983-b3bf-9bd762fa590d '!=' 2017ace7-c19e-4983-b3bf-9bd762fa590d ']' 00:21:56.639 04:43:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84669 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84669 ']' 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84669 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84669 00:21:56.639 killing process with pid 84669 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84669' 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84669 00:21:56.639 [2024-11-27 04:43:44.125431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:56.639 04:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84669 00:21:56.639 [2024-11-27 04:43:44.125536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.639 [2024-11-27 04:43:44.125635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:56.639 [2024-11-27 04:43:44.125658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:56.898 [2024-11-27 04:43:44.474300] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:58.273 04:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:58.273 
************************************ 00:21:58.273 END TEST raid5f_superblock_test 00:21:58.273 ************************************ 00:21:58.273 00:21:58.273 real 0m9.530s 00:21:58.273 user 0m15.728s 00:21:58.273 sys 0m1.364s 00:21:58.273 04:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.273 04:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.273 04:43:45 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:21:58.273 04:43:45 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:21:58.273 04:43:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:58.273 04:43:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.273 04:43:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:58.273 ************************************ 00:21:58.273 START TEST raid5f_rebuild_test 00:21:58.273 ************************************ 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:58.273 04:43:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85160 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85160 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85160 ']' 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.273 04:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.273 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:58.273 Zero copy mechanism will not be used. 00:21:58.273 [2024-11-27 04:43:45.680716] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:21:58.274 [2024-11-27 04:43:45.680911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85160 ] 00:21:58.274 [2024-11-27 04:43:45.862602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.573 [2024-11-27 04:43:45.990234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.573 [2024-11-27 04:43:46.191113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:58.573 [2024-11-27 04:43:46.191180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.140 BaseBdev1_malloc 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.140 [2024-11-27 04:43:46.663342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:21:59.140 [2024-11-27 04:43:46.663554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.140 [2024-11-27 04:43:46.663631] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:59.140 [2024-11-27 04:43:46.663767] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.140 [2024-11-27 04:43:46.666533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.140 BaseBdev1 00:21:59.140 [2024-11-27 04:43:46.666702] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.140 BaseBdev2_malloc 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.140 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.140 [2024-11-27 04:43:46.715082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:59.140 [2024-11-27 04:43:46.715289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.140 [2024-11-27 04:43:46.715366] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:59.140 [2024-11-27 04:43:46.715536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.140 [2024-11-27 04:43:46.718304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.140 [2024-11-27 04:43:46.718464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:59.140 BaseBdev2 00:21:59.141 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.141 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:59.141 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:59.141 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.141 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.399 BaseBdev3_malloc 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.399 [2024-11-27 04:43:46.781488] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:59.399 [2024-11-27 04:43:46.781561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.399 [2024-11-27 04:43:46.781594] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:59.399 [2024-11-27 04:43:46.781614] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.399 
[2024-11-27 04:43:46.784385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.399 [2024-11-27 04:43:46.784436] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:59.399 BaseBdev3 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.399 BaseBdev4_malloc 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.399 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.399 [2024-11-27 04:43:46.833196] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:59.399 [2024-11-27 04:43:46.833274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.399 [2024-11-27 04:43:46.833306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:59.399 [2024-11-27 04:43:46.833325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.399 [2024-11-27 04:43:46.836046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.400 [2024-11-27 04:43:46.836101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:21:59.400 BaseBdev4 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.400 spare_malloc 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.400 spare_delay 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.400 [2024-11-27 04:43:46.892864] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:59.400 [2024-11-27 04:43:46.893427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.400 [2024-11-27 04:43:46.893501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:59.400 [2024-11-27 04:43:46.893617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.400 [2024-11-27 04:43:46.896396] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.400 [2024-11-27 04:43:46.896449] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:59.400 spare 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.400 [2024-11-27 04:43:46.901104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:59.400 [2024-11-27 04:43:46.903610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:59.400 [2024-11-27 04:43:46.903832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:59.400 [2024-11-27 04:43:46.903967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:59.400 [2024-11-27 04:43:46.904177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:59.400 [2024-11-27 04:43:46.904291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:59.400 [2024-11-27 04:43:46.904624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:59.400 [2024-11-27 04:43:46.911312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:59.400 [2024-11-27 04:43:46.911445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:59.400 [2024-11-27 04:43:46.911714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.400 04:43:46 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.400 "name": "raid_bdev1", 00:21:59.400 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:21:59.400 "strip_size_kb": 64, 00:21:59.400 "state": "online", 00:21:59.400 
"raid_level": "raid5f", 00:21:59.400 "superblock": false, 00:21:59.400 "num_base_bdevs": 4, 00:21:59.400 "num_base_bdevs_discovered": 4, 00:21:59.400 "num_base_bdevs_operational": 4, 00:21:59.400 "base_bdevs_list": [ 00:21:59.400 { 00:21:59.400 "name": "BaseBdev1", 00:21:59.400 "uuid": "23c9f50d-4f3e-5958-ac49-54c52f2e4605", 00:21:59.400 "is_configured": true, 00:21:59.400 "data_offset": 0, 00:21:59.400 "data_size": 65536 00:21:59.400 }, 00:21:59.400 { 00:21:59.400 "name": "BaseBdev2", 00:21:59.400 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:21:59.400 "is_configured": true, 00:21:59.400 "data_offset": 0, 00:21:59.400 "data_size": 65536 00:21:59.400 }, 00:21:59.400 { 00:21:59.400 "name": "BaseBdev3", 00:21:59.400 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:21:59.400 "is_configured": true, 00:21:59.400 "data_offset": 0, 00:21:59.400 "data_size": 65536 00:21:59.400 }, 00:21:59.400 { 00:21:59.400 "name": "BaseBdev4", 00:21:59.400 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:21:59.400 "is_configured": true, 00:21:59.400 "data_offset": 0, 00:21:59.400 "data_size": 65536 00:21:59.400 } 00:21:59.400 ] 00:21:59.400 }' 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.400 04:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:59.968 [2024-11-27 04:43:47.415384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:21:59.968 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:00.226 [2024-11-27 04:43:47.795271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:00.226 /dev/nbd0 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:00.226 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.226 1+0 records in 00:22:00.226 1+0 records out 00:22:00.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330818 s, 12.4 MB/s 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:22:00.485 04:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:22:01.051 512+0 records in 00:22:01.051 512+0 records out 00:22:01.051 100663296 bytes (101 MB, 96 MiB) copied, 0.614958 s, 164 MB/s 00:22:01.051 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:01.051 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:01.051 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:01.051 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:01.051 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:01.051 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:01.051 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:01.309 
[2024-11-27 04:43:48.736976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.309 [2024-11-27 04:43:48.748506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.309 "name": "raid_bdev1", 00:22:01.309 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:01.309 "strip_size_kb": 64, 00:22:01.309 "state": "online", 00:22:01.309 "raid_level": "raid5f", 00:22:01.309 "superblock": false, 00:22:01.309 "num_base_bdevs": 4, 00:22:01.309 "num_base_bdevs_discovered": 3, 00:22:01.309 "num_base_bdevs_operational": 3, 00:22:01.309 "base_bdevs_list": [ 00:22:01.309 { 00:22:01.309 "name": null, 00:22:01.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.309 "is_configured": false, 00:22:01.309 "data_offset": 0, 00:22:01.309 "data_size": 65536 00:22:01.309 }, 00:22:01.309 { 00:22:01.309 "name": "BaseBdev2", 00:22:01.309 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:01.309 "is_configured": true, 00:22:01.309 "data_offset": 0, 00:22:01.309 "data_size": 65536 00:22:01.309 }, 00:22:01.309 { 00:22:01.309 "name": "BaseBdev3", 00:22:01.309 "uuid": 
"acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:01.309 "is_configured": true, 00:22:01.309 "data_offset": 0, 00:22:01.309 "data_size": 65536 00:22:01.309 }, 00:22:01.309 { 00:22:01.309 "name": "BaseBdev4", 00:22:01.309 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:01.309 "is_configured": true, 00:22:01.309 "data_offset": 0, 00:22:01.309 "data_size": 65536 00:22:01.309 } 00:22:01.309 ] 00:22:01.309 }' 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.309 04:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.875 04:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:01.875 04:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.875 04:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.875 [2024-11-27 04:43:49.228640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:01.875 [2024-11-27 04:43:49.242727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:22:01.875 04:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.875 04:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:01.875 [2024-11-27 04:43:49.251650] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:02.810 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:02.810 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:02.810 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:02.810 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:02.810 04:43:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:02.810 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:02.811 "name": "raid_bdev1", 00:22:02.811 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:02.811 "strip_size_kb": 64, 00:22:02.811 "state": "online", 00:22:02.811 "raid_level": "raid5f", 00:22:02.811 "superblock": false, 00:22:02.811 "num_base_bdevs": 4, 00:22:02.811 "num_base_bdevs_discovered": 4, 00:22:02.811 "num_base_bdevs_operational": 4, 00:22:02.811 "process": { 00:22:02.811 "type": "rebuild", 00:22:02.811 "target": "spare", 00:22:02.811 "progress": { 00:22:02.811 "blocks": 17280, 00:22:02.811 "percent": 8 00:22:02.811 } 00:22:02.811 }, 00:22:02.811 "base_bdevs_list": [ 00:22:02.811 { 00:22:02.811 "name": "spare", 00:22:02.811 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:02.811 "is_configured": true, 00:22:02.811 "data_offset": 0, 00:22:02.811 "data_size": 65536 00:22:02.811 }, 00:22:02.811 { 00:22:02.811 "name": "BaseBdev2", 00:22:02.811 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:02.811 "is_configured": true, 00:22:02.811 "data_offset": 0, 00:22:02.811 "data_size": 65536 00:22:02.811 }, 00:22:02.811 { 00:22:02.811 "name": "BaseBdev3", 00:22:02.811 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:02.811 "is_configured": true, 00:22:02.811 "data_offset": 0, 00:22:02.811 "data_size": 65536 00:22:02.811 }, 
00:22:02.811 { 00:22:02.811 "name": "BaseBdev4", 00:22:02.811 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:02.811 "is_configured": true, 00:22:02.811 "data_offset": 0, 00:22:02.811 "data_size": 65536 00:22:02.811 } 00:22:02.811 ] 00:22:02.811 }' 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.811 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.811 [2024-11-27 04:43:50.405442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.070 [2024-11-27 04:43:50.463423] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:03.070 [2024-11-27 04:43:50.463733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.070 [2024-11-27 04:43:50.464030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.070 [2024-11-27 04:43:50.464158] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.070 "name": "raid_bdev1", 00:22:03.070 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:03.070 "strip_size_kb": 64, 00:22:03.070 "state": "online", 00:22:03.070 "raid_level": "raid5f", 00:22:03.070 "superblock": false, 00:22:03.070 "num_base_bdevs": 4, 00:22:03.070 "num_base_bdevs_discovered": 3, 00:22:03.070 "num_base_bdevs_operational": 3, 00:22:03.070 "base_bdevs_list": [ 00:22:03.070 { 00:22:03.070 "name": null, 00:22:03.070 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:03.070 "is_configured": false, 00:22:03.070 "data_offset": 0, 00:22:03.070 "data_size": 65536 00:22:03.070 }, 00:22:03.070 { 00:22:03.070 "name": "BaseBdev2", 00:22:03.070 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:03.070 "is_configured": true, 00:22:03.070 "data_offset": 0, 00:22:03.070 "data_size": 65536 00:22:03.070 }, 00:22:03.070 { 00:22:03.070 "name": "BaseBdev3", 00:22:03.070 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:03.070 "is_configured": true, 00:22:03.070 "data_offset": 0, 00:22:03.070 "data_size": 65536 00:22:03.070 }, 00:22:03.070 { 00:22:03.070 "name": "BaseBdev4", 00:22:03.070 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:03.070 "is_configured": true, 00:22:03.070 "data_offset": 0, 00:22:03.070 "data_size": 65536 00:22:03.070 } 00:22:03.070 ] 00:22:03.070 }' 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.070 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.638 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:03.638 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.638 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:03.638 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:03.638 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.638 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.638 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.638 04:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.638 04:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.638 04:43:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.638 04:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.638 "name": "raid_bdev1", 00:22:03.638 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:03.638 "strip_size_kb": 64, 00:22:03.638 "state": "online", 00:22:03.638 "raid_level": "raid5f", 00:22:03.638 "superblock": false, 00:22:03.638 "num_base_bdevs": 4, 00:22:03.638 "num_base_bdevs_discovered": 3, 00:22:03.638 "num_base_bdevs_operational": 3, 00:22:03.638 "base_bdevs_list": [ 00:22:03.638 { 00:22:03.638 "name": null, 00:22:03.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.638 "is_configured": false, 00:22:03.638 "data_offset": 0, 00:22:03.638 "data_size": 65536 00:22:03.638 }, 00:22:03.638 { 00:22:03.638 "name": "BaseBdev2", 00:22:03.638 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:03.638 "is_configured": true, 00:22:03.638 "data_offset": 0, 00:22:03.638 "data_size": 65536 00:22:03.638 }, 00:22:03.638 { 00:22:03.638 "name": "BaseBdev3", 00:22:03.638 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:03.638 "is_configured": true, 00:22:03.638 "data_offset": 0, 00:22:03.639 "data_size": 65536 00:22:03.639 }, 00:22:03.639 { 00:22:03.639 "name": "BaseBdev4", 00:22:03.639 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:03.639 "is_configured": true, 00:22:03.639 "data_offset": 0, 00:22:03.639 "data_size": 65536 00:22:03.639 } 00:22:03.639 ] 00:22:03.639 }' 00:22:03.639 04:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.639 04:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:03.639 04:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.639 04:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:22:03.639 04:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:03.639 04:43:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.639 04:43:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.639 [2024-11-27 04:43:51.147145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:03.639 [2024-11-27 04:43:51.160478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:22:03.639 04:43:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.639 04:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:03.639 [2024-11-27 04:43:51.169295] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:04.573 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.573 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:04.573 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:04.573 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:04.573 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:04.573 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.573 04:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.573 04:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.573 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.573 04:43:52 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:04.832 "name": "raid_bdev1", 00:22:04.832 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:04.832 "strip_size_kb": 64, 00:22:04.832 "state": "online", 00:22:04.832 "raid_level": "raid5f", 00:22:04.832 "superblock": false, 00:22:04.832 "num_base_bdevs": 4, 00:22:04.832 "num_base_bdevs_discovered": 4, 00:22:04.832 "num_base_bdevs_operational": 4, 00:22:04.832 "process": { 00:22:04.832 "type": "rebuild", 00:22:04.832 "target": "spare", 00:22:04.832 "progress": { 00:22:04.832 "blocks": 17280, 00:22:04.832 "percent": 8 00:22:04.832 } 00:22:04.832 }, 00:22:04.832 "base_bdevs_list": [ 00:22:04.832 { 00:22:04.832 "name": "spare", 00:22:04.832 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:04.832 "is_configured": true, 00:22:04.832 "data_offset": 0, 00:22:04.832 "data_size": 65536 00:22:04.832 }, 00:22:04.832 { 00:22:04.832 "name": "BaseBdev2", 00:22:04.832 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:04.832 "is_configured": true, 00:22:04.832 "data_offset": 0, 00:22:04.832 "data_size": 65536 00:22:04.832 }, 00:22:04.832 { 00:22:04.832 "name": "BaseBdev3", 00:22:04.832 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:04.832 "is_configured": true, 00:22:04.832 "data_offset": 0, 00:22:04.832 "data_size": 65536 00:22:04.832 }, 00:22:04.832 { 00:22:04.832 "name": "BaseBdev4", 00:22:04.832 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:04.832 "is_configured": true, 00:22:04.832 "data_offset": 0, 00:22:04.832 "data_size": 65536 00:22:04.832 } 00:22:04.832 ] 00:22:04.832 }' 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=673 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.832 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:04.832 "name": "raid_bdev1", 00:22:04.832 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:04.832 "strip_size_kb": 64, 
00:22:04.832 "state": "online", 00:22:04.832 "raid_level": "raid5f", 00:22:04.832 "superblock": false, 00:22:04.832 "num_base_bdevs": 4, 00:22:04.832 "num_base_bdevs_discovered": 4, 00:22:04.832 "num_base_bdevs_operational": 4, 00:22:04.832 "process": { 00:22:04.832 "type": "rebuild", 00:22:04.832 "target": "spare", 00:22:04.832 "progress": { 00:22:04.832 "blocks": 21120, 00:22:04.832 "percent": 10 00:22:04.833 } 00:22:04.833 }, 00:22:04.833 "base_bdevs_list": [ 00:22:04.833 { 00:22:04.833 "name": "spare", 00:22:04.833 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:04.833 "is_configured": true, 00:22:04.833 "data_offset": 0, 00:22:04.833 "data_size": 65536 00:22:04.833 }, 00:22:04.833 { 00:22:04.833 "name": "BaseBdev2", 00:22:04.833 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:04.833 "is_configured": true, 00:22:04.833 "data_offset": 0, 00:22:04.833 "data_size": 65536 00:22:04.833 }, 00:22:04.833 { 00:22:04.833 "name": "BaseBdev3", 00:22:04.833 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:04.833 "is_configured": true, 00:22:04.833 "data_offset": 0, 00:22:04.833 "data_size": 65536 00:22:04.833 }, 00:22:04.833 { 00:22:04.833 "name": "BaseBdev4", 00:22:04.833 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:04.833 "is_configured": true, 00:22:04.833 "data_offset": 0, 00:22:04.833 "data_size": 65536 00:22:04.833 } 00:22:04.833 ] 00:22:04.833 }' 00:22:04.833 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:04.833 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:04.833 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.091 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.091 04:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:06.027 "name": "raid_bdev1", 00:22:06.027 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:06.027 "strip_size_kb": 64, 00:22:06.027 "state": "online", 00:22:06.027 "raid_level": "raid5f", 00:22:06.027 "superblock": false, 00:22:06.027 "num_base_bdevs": 4, 00:22:06.027 "num_base_bdevs_discovered": 4, 00:22:06.027 "num_base_bdevs_operational": 4, 00:22:06.027 "process": { 00:22:06.027 "type": "rebuild", 00:22:06.027 "target": "spare", 00:22:06.027 "progress": { 00:22:06.027 "blocks": 44160, 00:22:06.027 "percent": 22 00:22:06.027 } 00:22:06.027 }, 00:22:06.027 "base_bdevs_list": [ 00:22:06.027 { 00:22:06.027 "name": "spare", 00:22:06.027 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:06.027 "is_configured": true, 
00:22:06.027 "data_offset": 0, 00:22:06.027 "data_size": 65536 00:22:06.027 }, 00:22:06.027 { 00:22:06.027 "name": "BaseBdev2", 00:22:06.027 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:06.027 "is_configured": true, 00:22:06.027 "data_offset": 0, 00:22:06.027 "data_size": 65536 00:22:06.027 }, 00:22:06.027 { 00:22:06.027 "name": "BaseBdev3", 00:22:06.027 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:06.027 "is_configured": true, 00:22:06.027 "data_offset": 0, 00:22:06.027 "data_size": 65536 00:22:06.027 }, 00:22:06.027 { 00:22:06.027 "name": "BaseBdev4", 00:22:06.027 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:06.027 "is_configured": true, 00:22:06.027 "data_offset": 0, 00:22:06.027 "data_size": 65536 00:22:06.027 } 00:22:06.027 ] 00:22:06.027 }' 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:06.027 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:06.285 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.285 04:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:07.219 "name": "raid_bdev1", 00:22:07.219 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:07.219 "strip_size_kb": 64, 00:22:07.219 "state": "online", 00:22:07.219 "raid_level": "raid5f", 00:22:07.219 "superblock": false, 00:22:07.219 "num_base_bdevs": 4, 00:22:07.219 "num_base_bdevs_discovered": 4, 00:22:07.219 "num_base_bdevs_operational": 4, 00:22:07.219 "process": { 00:22:07.219 "type": "rebuild", 00:22:07.219 "target": "spare", 00:22:07.219 "progress": { 00:22:07.219 "blocks": 65280, 00:22:07.219 "percent": 33 00:22:07.219 } 00:22:07.219 }, 00:22:07.219 "base_bdevs_list": [ 00:22:07.219 { 00:22:07.219 "name": "spare", 00:22:07.219 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:07.219 "is_configured": true, 00:22:07.219 "data_offset": 0, 00:22:07.219 "data_size": 65536 00:22:07.219 }, 00:22:07.219 { 00:22:07.219 "name": "BaseBdev2", 00:22:07.219 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:07.219 "is_configured": true, 00:22:07.219 "data_offset": 0, 00:22:07.219 "data_size": 65536 00:22:07.219 }, 00:22:07.219 { 00:22:07.219 "name": "BaseBdev3", 00:22:07.219 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:07.219 "is_configured": true, 00:22:07.219 "data_offset": 0, 00:22:07.219 "data_size": 65536 00:22:07.219 }, 00:22:07.219 { 00:22:07.219 "name": "BaseBdev4", 00:22:07.219 "uuid": 
"61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:07.219 "is_configured": true, 00:22:07.219 "data_offset": 0, 00:22:07.219 "data_size": 65536 00:22:07.219 } 00:22:07.219 ] 00:22:07.219 }' 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:07.219 04:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:08.592 "name": "raid_bdev1", 00:22:08.592 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:08.592 "strip_size_kb": 64, 00:22:08.592 "state": "online", 00:22:08.592 "raid_level": "raid5f", 00:22:08.592 "superblock": false, 00:22:08.592 "num_base_bdevs": 4, 00:22:08.592 "num_base_bdevs_discovered": 4, 00:22:08.592 "num_base_bdevs_operational": 4, 00:22:08.592 "process": { 00:22:08.592 "type": "rebuild", 00:22:08.592 "target": "spare", 00:22:08.592 "progress": { 00:22:08.592 "blocks": 88320, 00:22:08.592 "percent": 44 00:22:08.592 } 00:22:08.592 }, 00:22:08.592 "base_bdevs_list": [ 00:22:08.592 { 00:22:08.592 "name": "spare", 00:22:08.592 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:08.592 "is_configured": true, 00:22:08.592 "data_offset": 0, 00:22:08.592 "data_size": 65536 00:22:08.592 }, 00:22:08.592 { 00:22:08.592 "name": "BaseBdev2", 00:22:08.592 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:08.592 "is_configured": true, 00:22:08.592 "data_offset": 0, 00:22:08.592 "data_size": 65536 00:22:08.592 }, 00:22:08.592 { 00:22:08.592 "name": "BaseBdev3", 00:22:08.592 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:08.592 "is_configured": true, 00:22:08.592 "data_offset": 0, 00:22:08.592 "data_size": 65536 00:22:08.592 }, 00:22:08.592 { 00:22:08.592 "name": "BaseBdev4", 00:22:08.592 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:08.592 "is_configured": true, 00:22:08.592 "data_offset": 0, 00:22:08.592 "data_size": 65536 00:22:08.592 } 00:22:08.592 ] 00:22:08.592 }' 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:22:08.592 04:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:09.526 04:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:09.526 04:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:09.526 04:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:09.526 04:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:09.526 04:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:09.526 04:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:09.526 04:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.526 04:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.526 04:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.526 04:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.526 04:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.526 04:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:09.526 "name": "raid_bdev1", 00:22:09.526 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:09.526 "strip_size_kb": 64, 00:22:09.526 "state": "online", 00:22:09.526 "raid_level": "raid5f", 00:22:09.526 "superblock": false, 00:22:09.526 "num_base_bdevs": 4, 00:22:09.526 "num_base_bdevs_discovered": 4, 00:22:09.526 "num_base_bdevs_operational": 4, 00:22:09.526 "process": { 00:22:09.526 "type": "rebuild", 00:22:09.526 "target": "spare", 00:22:09.526 "progress": { 00:22:09.526 "blocks": 109440, 00:22:09.526 "percent": 55 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 
"base_bdevs_list": [ 00:22:09.526 { 00:22:09.526 "name": "spare", 00:22:09.526 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:09.526 "is_configured": true, 00:22:09.526 "data_offset": 0, 00:22:09.526 "data_size": 65536 00:22:09.526 }, 00:22:09.526 { 00:22:09.527 "name": "BaseBdev2", 00:22:09.527 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:09.527 "is_configured": true, 00:22:09.527 "data_offset": 0, 00:22:09.527 "data_size": 65536 00:22:09.527 }, 00:22:09.527 { 00:22:09.527 "name": "BaseBdev3", 00:22:09.527 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:09.527 "is_configured": true, 00:22:09.527 "data_offset": 0, 00:22:09.527 "data_size": 65536 00:22:09.527 }, 00:22:09.527 { 00:22:09.527 "name": "BaseBdev4", 00:22:09.527 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:09.527 "is_configured": true, 00:22:09.527 "data_offset": 0, 00:22:09.527 "data_size": 65536 00:22:09.527 } 00:22:09.527 ] 00:22:09.527 }' 00:22:09.527 04:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:09.527 04:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:09.527 04:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:09.812 04:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:09.812 04:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:10.797 04:43:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:10.797 "name": "raid_bdev1", 00:22:10.797 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:10.797 "strip_size_kb": 64, 00:22:10.797 "state": "online", 00:22:10.797 "raid_level": "raid5f", 00:22:10.797 "superblock": false, 00:22:10.797 "num_base_bdevs": 4, 00:22:10.797 "num_base_bdevs_discovered": 4, 00:22:10.797 "num_base_bdevs_operational": 4, 00:22:10.797 "process": { 00:22:10.797 "type": "rebuild", 00:22:10.797 "target": "spare", 00:22:10.797 "progress": { 00:22:10.797 "blocks": 132480, 00:22:10.797 "percent": 67 00:22:10.797 } 00:22:10.797 }, 00:22:10.797 "base_bdevs_list": [ 00:22:10.797 { 00:22:10.797 "name": "spare", 00:22:10.797 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:10.797 "is_configured": true, 00:22:10.797 "data_offset": 0, 00:22:10.797 "data_size": 65536 00:22:10.797 }, 00:22:10.797 { 00:22:10.797 "name": "BaseBdev2", 00:22:10.797 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:10.797 "is_configured": true, 00:22:10.797 "data_offset": 0, 00:22:10.797 "data_size": 65536 00:22:10.797 }, 00:22:10.797 { 00:22:10.797 "name": "BaseBdev3", 00:22:10.797 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:10.797 
"is_configured": true, 00:22:10.797 "data_offset": 0, 00:22:10.797 "data_size": 65536 00:22:10.797 }, 00:22:10.797 { 00:22:10.797 "name": "BaseBdev4", 00:22:10.797 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:10.797 "is_configured": true, 00:22:10.797 "data_offset": 0, 00:22:10.797 "data_size": 65536 00:22:10.797 } 00:22:10.797 ] 00:22:10.797 }' 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.797 04:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:22:11.738 04:43:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.996 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:11.996 "name": "raid_bdev1", 00:22:11.996 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:11.996 "strip_size_kb": 64, 00:22:11.996 "state": "online", 00:22:11.996 "raid_level": "raid5f", 00:22:11.996 "superblock": false, 00:22:11.996 "num_base_bdevs": 4, 00:22:11.996 "num_base_bdevs_discovered": 4, 00:22:11.996 "num_base_bdevs_operational": 4, 00:22:11.996 "process": { 00:22:11.996 "type": "rebuild", 00:22:11.996 "target": "spare", 00:22:11.996 "progress": { 00:22:11.996 "blocks": 153600, 00:22:11.996 "percent": 78 00:22:11.996 } 00:22:11.996 }, 00:22:11.996 "base_bdevs_list": [ 00:22:11.996 { 00:22:11.996 "name": "spare", 00:22:11.996 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:11.996 "is_configured": true, 00:22:11.996 "data_offset": 0, 00:22:11.996 "data_size": 65536 00:22:11.996 }, 00:22:11.996 { 00:22:11.996 "name": "BaseBdev2", 00:22:11.996 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:11.996 "is_configured": true, 00:22:11.996 "data_offset": 0, 00:22:11.996 "data_size": 65536 00:22:11.996 }, 00:22:11.996 { 00:22:11.996 "name": "BaseBdev3", 00:22:11.996 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:11.996 "is_configured": true, 00:22:11.996 "data_offset": 0, 00:22:11.996 "data_size": 65536 00:22:11.996 }, 00:22:11.996 { 00:22:11.996 "name": "BaseBdev4", 00:22:11.996 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:11.996 "is_configured": true, 00:22:11.996 "data_offset": 0, 00:22:11.996 "data_size": 65536 00:22:11.996 } 00:22:11.996 ] 00:22:11.996 }' 00:22:11.996 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:11.996 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.996 04:43:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:11.996 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.996 04:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:12.931 "name": "raid_bdev1", 00:22:12.931 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:12.931 "strip_size_kb": 64, 00:22:12.931 "state": "online", 00:22:12.931 "raid_level": "raid5f", 00:22:12.931 "superblock": false, 00:22:12.931 "num_base_bdevs": 4, 00:22:12.931 "num_base_bdevs_discovered": 4, 00:22:12.931 "num_base_bdevs_operational": 4, 00:22:12.931 "process": { 00:22:12.931 
"type": "rebuild", 00:22:12.931 "target": "spare", 00:22:12.931 "progress": { 00:22:12.931 "blocks": 176640, 00:22:12.931 "percent": 89 00:22:12.931 } 00:22:12.931 }, 00:22:12.931 "base_bdevs_list": [ 00:22:12.931 { 00:22:12.931 "name": "spare", 00:22:12.931 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:12.931 "is_configured": true, 00:22:12.931 "data_offset": 0, 00:22:12.931 "data_size": 65536 00:22:12.931 }, 00:22:12.931 { 00:22:12.931 "name": "BaseBdev2", 00:22:12.931 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:12.931 "is_configured": true, 00:22:12.931 "data_offset": 0, 00:22:12.931 "data_size": 65536 00:22:12.931 }, 00:22:12.931 { 00:22:12.931 "name": "BaseBdev3", 00:22:12.931 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:12.931 "is_configured": true, 00:22:12.931 "data_offset": 0, 00:22:12.931 "data_size": 65536 00:22:12.931 }, 00:22:12.931 { 00:22:12.931 "name": "BaseBdev4", 00:22:12.931 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:12.931 "is_configured": true, 00:22:12.931 "data_offset": 0, 00:22:12.931 "data_size": 65536 00:22:12.931 } 00:22:12.931 ] 00:22:12.931 }' 00:22:12.931 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:13.189 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.189 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:13.189 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.189 04:44:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:14.125 [2024-11-27 04:44:01.570499] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:14.125 [2024-11-27 04:44:01.570614] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:14.125 [2024-11-27 04:44:01.570683] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.125 "name": "raid_bdev1", 00:22:14.125 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:14.125 "strip_size_kb": 64, 00:22:14.125 "state": "online", 00:22:14.125 "raid_level": "raid5f", 00:22:14.125 "superblock": false, 00:22:14.125 "num_base_bdevs": 4, 00:22:14.125 "num_base_bdevs_discovered": 4, 00:22:14.125 "num_base_bdevs_operational": 4, 00:22:14.125 "base_bdevs_list": [ 00:22:14.125 { 00:22:14.125 "name": "spare", 00:22:14.125 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:14.125 "is_configured": true, 00:22:14.125 "data_offset": 0, 00:22:14.125 "data_size": 65536 00:22:14.125 }, 00:22:14.125 { 
00:22:14.125 "name": "BaseBdev2", 00:22:14.125 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:14.125 "is_configured": true, 00:22:14.125 "data_offset": 0, 00:22:14.125 "data_size": 65536 00:22:14.125 }, 00:22:14.125 { 00:22:14.125 "name": "BaseBdev3", 00:22:14.125 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:14.125 "is_configured": true, 00:22:14.125 "data_offset": 0, 00:22:14.125 "data_size": 65536 00:22:14.125 }, 00:22:14.125 { 00:22:14.125 "name": "BaseBdev4", 00:22:14.125 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:14.125 "is_configured": true, 00:22:14.125 "data_offset": 0, 00:22:14.125 "data_size": 65536 00:22:14.125 } 00:22:14.125 ] 00:22:14.125 }' 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.125 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.390 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.390 "name": "raid_bdev1", 00:22:14.390 "uuid": "08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:14.390 "strip_size_kb": 64, 00:22:14.390 "state": "online", 00:22:14.391 "raid_level": "raid5f", 00:22:14.391 "superblock": false, 00:22:14.391 "num_base_bdevs": 4, 00:22:14.391 "num_base_bdevs_discovered": 4, 00:22:14.391 "num_base_bdevs_operational": 4, 00:22:14.391 "base_bdevs_list": [ 00:22:14.391 { 00:22:14.391 "name": "spare", 00:22:14.391 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:14.391 "is_configured": true, 00:22:14.391 "data_offset": 0, 00:22:14.391 "data_size": 65536 00:22:14.391 }, 00:22:14.391 { 00:22:14.391 "name": "BaseBdev2", 00:22:14.391 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:14.391 "is_configured": true, 00:22:14.391 "data_offset": 0, 00:22:14.391 "data_size": 65536 00:22:14.391 }, 00:22:14.391 { 00:22:14.391 "name": "BaseBdev3", 00:22:14.391 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:14.391 "is_configured": true, 00:22:14.391 "data_offset": 0, 00:22:14.391 "data_size": 65536 00:22:14.391 }, 00:22:14.391 { 00:22:14.391 "name": "BaseBdev4", 00:22:14.391 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:14.391 "is_configured": true, 00:22:14.391 "data_offset": 0, 00:22:14.391 "data_size": 65536 00:22:14.391 } 00:22:14.391 ] 00:22:14.391 }' 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:14.391 04:44:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.391 04:44:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.391 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.391 "name": "raid_bdev1", 00:22:14.391 "uuid": 
"08fe71f4-665a-41d5-80cc-8f84a6045395", 00:22:14.391 "strip_size_kb": 64, 00:22:14.391 "state": "online", 00:22:14.391 "raid_level": "raid5f", 00:22:14.391 "superblock": false, 00:22:14.391 "num_base_bdevs": 4, 00:22:14.391 "num_base_bdevs_discovered": 4, 00:22:14.391 "num_base_bdevs_operational": 4, 00:22:14.391 "base_bdevs_list": [ 00:22:14.391 { 00:22:14.391 "name": "spare", 00:22:14.391 "uuid": "aaebbaae-4466-5e3d-a370-7767f559017e", 00:22:14.391 "is_configured": true, 00:22:14.391 "data_offset": 0, 00:22:14.391 "data_size": 65536 00:22:14.391 }, 00:22:14.391 { 00:22:14.391 "name": "BaseBdev2", 00:22:14.391 "uuid": "c5154446-6381-5624-8e8e-5d70895be9b9", 00:22:14.391 "is_configured": true, 00:22:14.391 "data_offset": 0, 00:22:14.391 "data_size": 65536 00:22:14.391 }, 00:22:14.391 { 00:22:14.391 "name": "BaseBdev3", 00:22:14.391 "uuid": "acb9539f-37d6-5cde-b2b7-b8c3c077bc6f", 00:22:14.391 "is_configured": true, 00:22:14.391 "data_offset": 0, 00:22:14.391 "data_size": 65536 00:22:14.391 }, 00:22:14.391 { 00:22:14.391 "name": "BaseBdev4", 00:22:14.391 "uuid": "61d3f5c0-96ce-524f-95fe-0c7769d38fc5", 00:22:14.391 "is_configured": true, 00:22:14.391 "data_offset": 0, 00:22:14.391 "data_size": 65536 00:22:14.391 } 00:22:14.391 ] 00:22:14.391 }' 00:22:14.391 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.391 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.959 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:14.959 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.959 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.959 [2024-11-27 04:44:02.493807] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:14.960 [2024-11-27 04:44:02.493855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:22:14.960 [2024-11-27 04:44:02.493970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.960 [2024-11-27 04:44:02.494094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.960 [2024-11-27 04:44:02.494112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:14.960 04:44:02 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:14.960 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:15.526 /dev/nbd0 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:15.526 1+0 records in 00:22:15.526 1+0 records out 00:22:15.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327128 s, 12.5 MB/s 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:15.526 04:44:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:15.785 /dev/nbd1 00:22:15.785 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:15.785 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:15.785 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:15.785 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:15.785 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:15.785 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:15.785 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:15.785 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:15.785 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:15.785 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:15.786 04:44:03 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:15.786 1+0 records in 00:22:15.786 1+0 records out 00:22:15.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371839 s, 11.0 MB/s 00:22:15.786 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.786 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:15.786 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.786 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:15.786 04:44:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:15.786 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:15.786 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:15.786 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:16.044 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:16.044 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:16.044 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:16.044 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:16.044 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:16.044 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.044 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:22:16.302 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:16.302 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:16.302 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:16.302 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.302 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.302 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:16.302 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:16.302 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.302 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.302 04:44:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:16.561 04:44:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85160 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85160 ']' 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85160 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85160 00:22:16.561 killing process with pid 85160 00:22:16.561 Received shutdown signal, test time was about 60.000000 seconds 00:22:16.561 00:22:16.561 Latency(us) 00:22:16.561 [2024-11-27T04:44:04.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.561 [2024-11-27T04:44:04.184Z] =================================================================================================================== 00:22:16.561 [2024-11-27T04:44:04.184Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85160' 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85160 00:22:16.561 04:44:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85160 00:22:16.561 [2024-11-27 04:44:04.123112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:17.128 [2024-11-27 04:44:04.548452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:22:18.068 00:22:18.068 real 0m19.994s 00:22:18.068 user 0m24.969s 00:22:18.068 sys 0m2.139s 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.068 ************************************ 00:22:18.068 END TEST raid5f_rebuild_test 00:22:18.068 ************************************ 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.068 04:44:05 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:22:18.068 04:44:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:18.068 04:44:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.068 04:44:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:18.068 ************************************ 00:22:18.068 START TEST raid5f_rebuild_test_sb 00:22:18.068 ************************************ 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:18.068 04:44:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:18.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85670 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85670 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85670 ']' 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.068 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.069 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.069 04:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.326 [2024-11-27 04:44:05.720499] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:22:18.326 [2024-11-27 04:44:05.720889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85670 ] 00:22:18.326 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:18.326 Zero copy mechanism will not be used. 00:22:18.326 [2024-11-27 04:44:05.896666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.584 [2024-11-27 04:44:06.025230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.842 [2024-11-27 04:44:06.227389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:18.842 [2024-11-27 04:44:06.227445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:19.100 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.100 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:19.100 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:19.100 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:19.100 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.100 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.359 BaseBdev1_malloc 00:22:19.359 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.359 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.360 [2024-11-27 04:44:06.752415] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:19.360 [2024-11-27 04:44:06.752659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.360 [2024-11-27 04:44:06.752738] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:19.360 [2024-11-27 04:44:06.753012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.360 [2024-11-27 04:44:06.755963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.360 [2024-11-27 04:44:06.756014] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:19.360 BaseBdev1 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.360 BaseBdev2_malloc 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.360 [2024-11-27 04:44:06.804953] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:19.360 
[2024-11-27 04:44:06.805030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.360 [2024-11-27 04:44:06.805062] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:19.360 [2024-11-27 04:44:06.805080] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.360 [2024-11-27 04:44:06.807864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.360 [2024-11-27 04:44:06.807912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:19.360 BaseBdev2 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.360 BaseBdev3_malloc 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.360 [2024-11-27 04:44:06.865835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:19.360 [2024-11-27 04:44:06.866029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.360 [2024-11-27 04:44:06.866071] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:19.360 [2024-11-27 04:44:06.866090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.360 [2024-11-27 04:44:06.868908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.360 [2024-11-27 04:44:06.868958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:19.360 BaseBdev3 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.360 BaseBdev4_malloc 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.360 [2024-11-27 04:44:06.918692] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:19.360 [2024-11-27 04:44:06.918934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.360 [2024-11-27 04:44:06.919011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:19.360 [2024-11-27 04:44:06.919138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:22:19.360 [2024-11-27 04:44:06.922065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.360 [2024-11-27 04:44:06.922118] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:19.360 BaseBdev4 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.360 spare_malloc 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.360 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.619 spare_delay 00:22:19.619 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.619 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:19.619 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.619 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.619 [2024-11-27 04:44:06.987658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:19.619 [2024-11-27 04:44:06.987895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.619 [2024-11-27 04:44:06.987974] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:19.619 [2024-11-27 04:44:06.988000] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.619 [2024-11-27 04:44:06.990957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.619 [2024-11-27 04:44:06.991009] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:19.619 spare 00:22:19.619 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.619 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:22:19.619 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.619 04:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.619 [2024-11-27 04:44:06.995752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:19.619 [2024-11-27 04:44:06.998461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:19.619 [2024-11-27 04:44:06.998674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:19.619 [2024-11-27 04:44:06.998822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:19.619 [2024-11-27 04:44:06.999164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:19.619 [2024-11-27 04:44:06.999230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:19.619 [2024-11-27 04:44:06.999675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:19.619 [2024-11-27 04:44:07.007025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:19.619 
[2024-11-27 04:44:07.007187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:19.619 [2024-11-27 04:44:07.007574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.619 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.619 "name": "raid_bdev1", 00:22:19.619 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:19.619 "strip_size_kb": 64, 00:22:19.619 "state": "online", 00:22:19.619 "raid_level": "raid5f", 00:22:19.619 "superblock": true, 00:22:19.619 "num_base_bdevs": 4, 00:22:19.619 "num_base_bdevs_discovered": 4, 00:22:19.619 "num_base_bdevs_operational": 4, 00:22:19.619 "base_bdevs_list": [ 00:22:19.619 { 00:22:19.619 "name": "BaseBdev1", 00:22:19.619 "uuid": "5b479d34-6046-58ff-a8c4-631a7c07748f", 00:22:19.619 "is_configured": true, 00:22:19.619 "data_offset": 2048, 00:22:19.619 "data_size": 63488 00:22:19.619 }, 00:22:19.619 { 00:22:19.619 "name": "BaseBdev2", 00:22:19.619 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:19.619 "is_configured": true, 00:22:19.619 "data_offset": 2048, 00:22:19.619 "data_size": 63488 00:22:19.619 }, 00:22:19.619 { 00:22:19.619 "name": "BaseBdev3", 00:22:19.619 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:19.620 "is_configured": true, 00:22:19.620 "data_offset": 2048, 00:22:19.620 "data_size": 63488 00:22:19.620 }, 00:22:19.620 { 00:22:19.620 "name": "BaseBdev4", 00:22:19.620 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:19.620 "is_configured": true, 00:22:19.620 "data_offset": 2048, 00:22:19.620 "data_size": 63488 00:22:19.620 } 00:22:19.620 ] 00:22:19.620 }' 00:22:19.620 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.620 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.188 [2024-11-27 04:44:07.543405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:20.188 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:20.447 [2024-11-27 04:44:07.951347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:20.447 /dev/nbd0 00:22:20.447 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:20.447 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:20.447 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:20.447 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:20.447 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:20.447 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:20.447 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:20.447 04:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:20.447 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:20.447 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:20.447 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:22:20.447 1+0 records in 00:22:20.447 1+0 records out 00:22:20.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293556 s, 14.0 MB/s 00:22:20.447 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.447 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:20.447 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.447 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:20.447 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:20.447 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:20.448 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:20.448 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:20.448 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:22:20.448 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:22:20.448 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:22:21.017 496+0 records in 00:22:21.017 496+0 records out 00:22:21.017 97517568 bytes (98 MB, 93 MiB) copied, 0.607312 s, 161 MB/s 00:22:21.017 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:21.017 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:21.017 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:21.017 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:22:21.017 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:21.017 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:21.017 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:21.585 [2024-11-27 04:44:08.926204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.585 [2024-11-27 04:44:08.942034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.585 04:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.585 04:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.585 "name": "raid_bdev1", 00:22:21.585 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:21.585 "strip_size_kb": 64, 00:22:21.585 "state": "online", 00:22:21.585 "raid_level": "raid5f", 00:22:21.585 "superblock": true, 00:22:21.585 "num_base_bdevs": 4, 00:22:21.585 "num_base_bdevs_discovered": 3, 00:22:21.585 
"num_base_bdevs_operational": 3, 00:22:21.585 "base_bdevs_list": [ 00:22:21.585 { 00:22:21.585 "name": null, 00:22:21.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.585 "is_configured": false, 00:22:21.585 "data_offset": 0, 00:22:21.585 "data_size": 63488 00:22:21.585 }, 00:22:21.585 { 00:22:21.585 "name": "BaseBdev2", 00:22:21.585 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:21.585 "is_configured": true, 00:22:21.585 "data_offset": 2048, 00:22:21.585 "data_size": 63488 00:22:21.585 }, 00:22:21.585 { 00:22:21.585 "name": "BaseBdev3", 00:22:21.585 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:21.585 "is_configured": true, 00:22:21.585 "data_offset": 2048, 00:22:21.585 "data_size": 63488 00:22:21.585 }, 00:22:21.585 { 00:22:21.585 "name": "BaseBdev4", 00:22:21.585 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:21.585 "is_configured": true, 00:22:21.585 "data_offset": 2048, 00:22:21.585 "data_size": 63488 00:22:21.585 } 00:22:21.585 ] 00:22:21.585 }' 00:22:21.585 04:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.585 04:44:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.843 04:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:21.843 04:44:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.843 04:44:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.843 [2024-11-27 04:44:09.462166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:22.102 [2024-11-27 04:44:09.476761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:22:22.102 04:44:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.102 04:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:22.102 
[2024-11-27 04:44:09.485867] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.051 "name": "raid_bdev1", 00:22:23.051 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:23.051 "strip_size_kb": 64, 00:22:23.051 "state": "online", 00:22:23.051 "raid_level": "raid5f", 00:22:23.051 "superblock": true, 00:22:23.051 "num_base_bdevs": 4, 00:22:23.051 "num_base_bdevs_discovered": 4, 00:22:23.051 "num_base_bdevs_operational": 4, 00:22:23.051 "process": { 00:22:23.051 "type": "rebuild", 00:22:23.051 "target": "spare", 00:22:23.051 "progress": { 00:22:23.051 "blocks": 17280, 00:22:23.051 "percent": 9 00:22:23.051 } 00:22:23.051 }, 00:22:23.051 "base_bdevs_list": [ 00:22:23.051 { 00:22:23.051 "name": 
"spare", 00:22:23.051 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:23.051 "is_configured": true, 00:22:23.051 "data_offset": 2048, 00:22:23.051 "data_size": 63488 00:22:23.051 }, 00:22:23.051 { 00:22:23.051 "name": "BaseBdev2", 00:22:23.051 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:23.051 "is_configured": true, 00:22:23.051 "data_offset": 2048, 00:22:23.051 "data_size": 63488 00:22:23.051 }, 00:22:23.051 { 00:22:23.051 "name": "BaseBdev3", 00:22:23.051 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:23.051 "is_configured": true, 00:22:23.051 "data_offset": 2048, 00:22:23.051 "data_size": 63488 00:22:23.051 }, 00:22:23.051 { 00:22:23.051 "name": "BaseBdev4", 00:22:23.051 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:23.051 "is_configured": true, 00:22:23.051 "data_offset": 2048, 00:22:23.051 "data_size": 63488 00:22:23.051 } 00:22:23.051 ] 00:22:23.051 }' 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.051 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.051 [2024-11-27 04:44:10.643411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:23.311 [2024-11-27 04:44:10.698268] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:23.311 [2024-11-27 
04:44:10.698601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.311 [2024-11-27 04:44:10.698634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:23.311 [2024-11-27 04:44:10.698651] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.311 "name": "raid_bdev1", 00:22:23.311 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:23.311 "strip_size_kb": 64, 00:22:23.311 "state": "online", 00:22:23.311 "raid_level": "raid5f", 00:22:23.311 "superblock": true, 00:22:23.311 "num_base_bdevs": 4, 00:22:23.311 "num_base_bdevs_discovered": 3, 00:22:23.311 "num_base_bdevs_operational": 3, 00:22:23.311 "base_bdevs_list": [ 00:22:23.311 { 00:22:23.311 "name": null, 00:22:23.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.311 "is_configured": false, 00:22:23.311 "data_offset": 0, 00:22:23.311 "data_size": 63488 00:22:23.311 }, 00:22:23.311 { 00:22:23.311 "name": "BaseBdev2", 00:22:23.311 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:23.311 "is_configured": true, 00:22:23.311 "data_offset": 2048, 00:22:23.311 "data_size": 63488 00:22:23.311 }, 00:22:23.311 { 00:22:23.311 "name": "BaseBdev3", 00:22:23.311 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:23.311 "is_configured": true, 00:22:23.311 "data_offset": 2048, 00:22:23.311 "data_size": 63488 00:22:23.311 }, 00:22:23.311 { 00:22:23.311 "name": "BaseBdev4", 00:22:23.311 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:23.311 "is_configured": true, 00:22:23.311 "data_offset": 2048, 00:22:23.311 "data_size": 63488 00:22:23.311 } 00:22:23.311 ] 00:22:23.311 }' 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.311 04:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.876 "name": "raid_bdev1", 00:22:23.876 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:23.876 "strip_size_kb": 64, 00:22:23.876 "state": "online", 00:22:23.876 "raid_level": "raid5f", 00:22:23.876 "superblock": true, 00:22:23.876 "num_base_bdevs": 4, 00:22:23.876 "num_base_bdevs_discovered": 3, 00:22:23.876 "num_base_bdevs_operational": 3, 00:22:23.876 "base_bdevs_list": [ 00:22:23.876 { 00:22:23.876 "name": null, 00:22:23.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.876 "is_configured": false, 00:22:23.876 "data_offset": 0, 00:22:23.876 "data_size": 63488 00:22:23.876 }, 00:22:23.876 { 00:22:23.876 "name": "BaseBdev2", 00:22:23.876 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:23.876 "is_configured": true, 00:22:23.876 "data_offset": 2048, 00:22:23.876 "data_size": 63488 00:22:23.876 }, 00:22:23.876 { 00:22:23.876 "name": "BaseBdev3", 00:22:23.876 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:23.876 "is_configured": true, 
00:22:23.876 "data_offset": 2048, 00:22:23.876 "data_size": 63488 00:22:23.876 }, 00:22:23.876 { 00:22:23.876 "name": "BaseBdev4", 00:22:23.876 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:23.876 "is_configured": true, 00:22:23.876 "data_offset": 2048, 00:22:23.876 "data_size": 63488 00:22:23.876 } 00:22:23.876 ] 00:22:23.876 }' 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.876 [2024-11-27 04:44:11.442161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:23.876 [2024-11-27 04:44:11.455682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.876 04:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:23.876 [2024-11-27 04:44:11.464446] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:25.247 04:44:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:25.247 "name": "raid_bdev1", 00:22:25.247 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:25.247 "strip_size_kb": 64, 00:22:25.247 "state": "online", 00:22:25.247 "raid_level": "raid5f", 00:22:25.247 "superblock": true, 00:22:25.247 "num_base_bdevs": 4, 00:22:25.247 "num_base_bdevs_discovered": 4, 00:22:25.247 "num_base_bdevs_operational": 4, 00:22:25.247 "process": { 00:22:25.247 "type": "rebuild", 00:22:25.247 "target": "spare", 00:22:25.247 "progress": { 00:22:25.247 "blocks": 17280, 00:22:25.247 "percent": 9 00:22:25.247 } 00:22:25.247 }, 00:22:25.247 "base_bdevs_list": [ 00:22:25.247 { 00:22:25.247 "name": "spare", 00:22:25.247 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:25.247 "is_configured": true, 00:22:25.247 "data_offset": 2048, 00:22:25.247 "data_size": 63488 00:22:25.247 }, 00:22:25.247 { 00:22:25.247 "name": "BaseBdev2", 00:22:25.247 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:25.247 "is_configured": true, 00:22:25.247 "data_offset": 2048, 00:22:25.247 "data_size": 63488 
00:22:25.247 }, 00:22:25.247 { 00:22:25.247 "name": "BaseBdev3", 00:22:25.247 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:25.247 "is_configured": true, 00:22:25.247 "data_offset": 2048, 00:22:25.247 "data_size": 63488 00:22:25.247 }, 00:22:25.247 { 00:22:25.247 "name": "BaseBdev4", 00:22:25.247 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:25.247 "is_configured": true, 00:22:25.247 "data_offset": 2048, 00:22:25.247 "data_size": 63488 00:22:25.247 } 00:22:25.247 ] 00:22:25.247 }' 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:25.247 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=693 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.247 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:25.248 04:44:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:25.248 "name": "raid_bdev1", 00:22:25.248 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:25.248 "strip_size_kb": 64, 00:22:25.248 "state": "online", 00:22:25.248 "raid_level": "raid5f", 00:22:25.248 "superblock": true, 00:22:25.248 "num_base_bdevs": 4, 00:22:25.248 "num_base_bdevs_discovered": 4, 00:22:25.248 "num_base_bdevs_operational": 4, 00:22:25.248 "process": { 00:22:25.248 "type": "rebuild", 00:22:25.248 "target": "spare", 00:22:25.248 "progress": { 00:22:25.248 "blocks": 21120, 00:22:25.248 "percent": 11 00:22:25.248 } 00:22:25.248 }, 00:22:25.248 "base_bdevs_list": [ 00:22:25.248 { 00:22:25.248 "name": "spare", 00:22:25.248 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:25.248 "is_configured": true, 00:22:25.248 "data_offset": 2048, 00:22:25.248 "data_size": 63488 00:22:25.248 }, 00:22:25.248 { 00:22:25.248 "name": "BaseBdev2", 00:22:25.248 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:25.248 "is_configured": true, 00:22:25.248 "data_offset": 2048, 00:22:25.248 "data_size": 63488 
00:22:25.248 }, 00:22:25.248 { 00:22:25.248 "name": "BaseBdev3", 00:22:25.248 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:25.248 "is_configured": true, 00:22:25.248 "data_offset": 2048, 00:22:25.248 "data_size": 63488 00:22:25.248 }, 00:22:25.248 { 00:22:25.248 "name": "BaseBdev4", 00:22:25.248 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:25.248 "is_configured": true, 00:22:25.248 "data_offset": 2048, 00:22:25.248 "data_size": 63488 00:22:25.248 } 00:22:25.248 ] 00:22:25.248 }' 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.248 04:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:26.182 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:26.182 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.182 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.182 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:26.182 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:26.182 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.182 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.182 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:26.182 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.182 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.440 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.440 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.440 "name": "raid_bdev1", 00:22:26.440 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:26.440 "strip_size_kb": 64, 00:22:26.440 "state": "online", 00:22:26.440 "raid_level": "raid5f", 00:22:26.440 "superblock": true, 00:22:26.440 "num_base_bdevs": 4, 00:22:26.440 "num_base_bdevs_discovered": 4, 00:22:26.440 "num_base_bdevs_operational": 4, 00:22:26.440 "process": { 00:22:26.440 "type": "rebuild", 00:22:26.440 "target": "spare", 00:22:26.440 "progress": { 00:22:26.440 "blocks": 44160, 00:22:26.440 "percent": 23 00:22:26.440 } 00:22:26.440 }, 00:22:26.440 "base_bdevs_list": [ 00:22:26.440 { 00:22:26.440 "name": "spare", 00:22:26.440 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:26.440 "is_configured": true, 00:22:26.440 "data_offset": 2048, 00:22:26.440 "data_size": 63488 00:22:26.440 }, 00:22:26.440 { 00:22:26.440 "name": "BaseBdev2", 00:22:26.440 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:26.440 "is_configured": true, 00:22:26.440 "data_offset": 2048, 00:22:26.440 "data_size": 63488 00:22:26.440 }, 00:22:26.440 { 00:22:26.440 "name": "BaseBdev3", 00:22:26.440 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:26.440 "is_configured": true, 00:22:26.440 "data_offset": 2048, 00:22:26.440 "data_size": 63488 00:22:26.440 }, 00:22:26.440 { 00:22:26.440 "name": "BaseBdev4", 00:22:26.440 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:26.440 "is_configured": true, 00:22:26.440 "data_offset": 2048, 00:22:26.440 "data_size": 63488 00:22:26.440 } 00:22:26.440 ] 00:22:26.440 }' 00:22:26.440 04:44:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.440 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.440 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.440 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.440 04:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.373 04:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.631 04:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:27.631 "name": "raid_bdev1", 00:22:27.631 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:27.631 
"strip_size_kb": 64, 00:22:27.631 "state": "online", 00:22:27.631 "raid_level": "raid5f", 00:22:27.631 "superblock": true, 00:22:27.631 "num_base_bdevs": 4, 00:22:27.631 "num_base_bdevs_discovered": 4, 00:22:27.631 "num_base_bdevs_operational": 4, 00:22:27.631 "process": { 00:22:27.631 "type": "rebuild", 00:22:27.631 "target": "spare", 00:22:27.631 "progress": { 00:22:27.631 "blocks": 65280, 00:22:27.631 "percent": 34 00:22:27.631 } 00:22:27.631 }, 00:22:27.631 "base_bdevs_list": [ 00:22:27.631 { 00:22:27.631 "name": "spare", 00:22:27.631 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:27.631 "is_configured": true, 00:22:27.631 "data_offset": 2048, 00:22:27.631 "data_size": 63488 00:22:27.631 }, 00:22:27.631 { 00:22:27.631 "name": "BaseBdev2", 00:22:27.631 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:27.631 "is_configured": true, 00:22:27.631 "data_offset": 2048, 00:22:27.631 "data_size": 63488 00:22:27.631 }, 00:22:27.631 { 00:22:27.631 "name": "BaseBdev3", 00:22:27.631 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:27.631 "is_configured": true, 00:22:27.631 "data_offset": 2048, 00:22:27.631 "data_size": 63488 00:22:27.631 }, 00:22:27.631 { 00:22:27.631 "name": "BaseBdev4", 00:22:27.631 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:27.631 "is_configured": true, 00:22:27.631 "data_offset": 2048, 00:22:27.631 "data_size": 63488 00:22:27.631 } 00:22:27.631 ] 00:22:27.631 }' 00:22:27.631 04:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:27.631 04:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:27.631 04:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:27.631 04:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.631 04:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:28.617 
04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:28.617 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.617 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.617 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:28.617 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:28.617 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.617 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.617 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.618 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.618 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.618 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.618 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.618 "name": "raid_bdev1", 00:22:28.618 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:28.618 "strip_size_kb": 64, 00:22:28.618 "state": "online", 00:22:28.618 "raid_level": "raid5f", 00:22:28.618 "superblock": true, 00:22:28.618 "num_base_bdevs": 4, 00:22:28.618 "num_base_bdevs_discovered": 4, 00:22:28.618 "num_base_bdevs_operational": 4, 00:22:28.618 "process": { 00:22:28.618 "type": "rebuild", 00:22:28.618 "target": "spare", 00:22:28.618 "progress": { 00:22:28.618 "blocks": 88320, 00:22:28.618 "percent": 46 00:22:28.618 } 00:22:28.618 }, 00:22:28.618 "base_bdevs_list": [ 00:22:28.618 { 00:22:28.618 "name": "spare", 00:22:28.618 "uuid": 
"1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:28.618 "is_configured": true, 00:22:28.618 "data_offset": 2048, 00:22:28.618 "data_size": 63488 00:22:28.618 }, 00:22:28.618 { 00:22:28.618 "name": "BaseBdev2", 00:22:28.618 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:28.618 "is_configured": true, 00:22:28.618 "data_offset": 2048, 00:22:28.618 "data_size": 63488 00:22:28.618 }, 00:22:28.618 { 00:22:28.618 "name": "BaseBdev3", 00:22:28.618 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:28.618 "is_configured": true, 00:22:28.618 "data_offset": 2048, 00:22:28.618 "data_size": 63488 00:22:28.618 }, 00:22:28.618 { 00:22:28.618 "name": "BaseBdev4", 00:22:28.618 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:28.618 "is_configured": true, 00:22:28.618 "data_offset": 2048, 00:22:28.618 "data_size": 63488 00:22:28.618 } 00:22:28.618 ] 00:22:28.618 }' 00:22:28.618 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.618 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.618 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.876 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.877 04:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:29.814 "name": "raid_bdev1", 00:22:29.814 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:29.814 "strip_size_kb": 64, 00:22:29.814 "state": "online", 00:22:29.814 "raid_level": "raid5f", 00:22:29.814 "superblock": true, 00:22:29.814 "num_base_bdevs": 4, 00:22:29.814 "num_base_bdevs_discovered": 4, 00:22:29.814 "num_base_bdevs_operational": 4, 00:22:29.814 "process": { 00:22:29.814 "type": "rebuild", 00:22:29.814 "target": "spare", 00:22:29.814 "progress": { 00:22:29.814 "blocks": 109440, 00:22:29.814 "percent": 57 00:22:29.814 } 00:22:29.814 }, 00:22:29.814 "base_bdevs_list": [ 00:22:29.814 { 00:22:29.814 "name": "spare", 00:22:29.814 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:29.814 "is_configured": true, 00:22:29.814 "data_offset": 2048, 00:22:29.814 "data_size": 63488 00:22:29.814 }, 00:22:29.814 { 00:22:29.814 "name": "BaseBdev2", 00:22:29.814 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:29.814 "is_configured": true, 00:22:29.814 "data_offset": 2048, 00:22:29.814 "data_size": 63488 00:22:29.814 }, 00:22:29.814 { 00:22:29.814 "name": "BaseBdev3", 00:22:29.814 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:29.814 "is_configured": true, 00:22:29.814 
"data_offset": 2048, 00:22:29.814 "data_size": 63488 00:22:29.814 }, 00:22:29.814 { 00:22:29.814 "name": "BaseBdev4", 00:22:29.814 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:29.814 "is_configured": true, 00:22:29.814 "data_offset": 2048, 00:22:29.814 "data_size": 63488 00:22:29.814 } 00:22:29.814 ] 00:22:29.814 }' 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.814 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.072 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.072 04:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:31.007 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:31.007 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.007 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.007 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.008 "name": "raid_bdev1", 00:22:31.008 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:31.008 "strip_size_kb": 64, 00:22:31.008 "state": "online", 00:22:31.008 "raid_level": "raid5f", 00:22:31.008 "superblock": true, 00:22:31.008 "num_base_bdevs": 4, 00:22:31.008 "num_base_bdevs_discovered": 4, 00:22:31.008 "num_base_bdevs_operational": 4, 00:22:31.008 "process": { 00:22:31.008 "type": "rebuild", 00:22:31.008 "target": "spare", 00:22:31.008 "progress": { 00:22:31.008 "blocks": 132480, 00:22:31.008 "percent": 69 00:22:31.008 } 00:22:31.008 }, 00:22:31.008 "base_bdevs_list": [ 00:22:31.008 { 00:22:31.008 "name": "spare", 00:22:31.008 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:31.008 "is_configured": true, 00:22:31.008 "data_offset": 2048, 00:22:31.008 "data_size": 63488 00:22:31.008 }, 00:22:31.008 { 00:22:31.008 "name": "BaseBdev2", 00:22:31.008 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:31.008 "is_configured": true, 00:22:31.008 "data_offset": 2048, 00:22:31.008 "data_size": 63488 00:22:31.008 }, 00:22:31.008 { 00:22:31.008 "name": "BaseBdev3", 00:22:31.008 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:31.008 "is_configured": true, 00:22:31.008 "data_offset": 2048, 00:22:31.008 "data_size": 63488 00:22:31.008 }, 00:22:31.008 { 00:22:31.008 "name": "BaseBdev4", 00:22:31.008 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:31.008 "is_configured": true, 00:22:31.008 "data_offset": 2048, 00:22:31.008 "data_size": 63488 00:22:31.008 } 00:22:31.008 ] 00:22:31.008 }' 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.008 04:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:32.381 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:32.381 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.381 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:32.381 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:32.381 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:32.381 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:32.381 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.381 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.382 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.382 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.382 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.382 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:32.382 "name": "raid_bdev1", 00:22:32.382 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:32.382 "strip_size_kb": 64, 00:22:32.382 "state": "online", 00:22:32.382 "raid_level": "raid5f", 00:22:32.382 "superblock": true, 00:22:32.382 "num_base_bdevs": 4, 00:22:32.382 "num_base_bdevs_discovered": 4, 
00:22:32.382 "num_base_bdevs_operational": 4, 00:22:32.382 "process": { 00:22:32.382 "type": "rebuild", 00:22:32.382 "target": "spare", 00:22:32.382 "progress": { 00:22:32.382 "blocks": 153600, 00:22:32.382 "percent": 80 00:22:32.382 } 00:22:32.382 }, 00:22:32.382 "base_bdevs_list": [ 00:22:32.382 { 00:22:32.382 "name": "spare", 00:22:32.382 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:32.382 "is_configured": true, 00:22:32.382 "data_offset": 2048, 00:22:32.382 "data_size": 63488 00:22:32.382 }, 00:22:32.382 { 00:22:32.382 "name": "BaseBdev2", 00:22:32.382 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:32.382 "is_configured": true, 00:22:32.382 "data_offset": 2048, 00:22:32.382 "data_size": 63488 00:22:32.382 }, 00:22:32.382 { 00:22:32.382 "name": "BaseBdev3", 00:22:32.382 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:32.382 "is_configured": true, 00:22:32.382 "data_offset": 2048, 00:22:32.382 "data_size": 63488 00:22:32.382 }, 00:22:32.382 { 00:22:32.382 "name": "BaseBdev4", 00:22:32.382 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:32.382 "is_configured": true, 00:22:32.382 "data_offset": 2048, 00:22:32.382 "data_size": 63488 00:22:32.382 } 00:22:32.382 ] 00:22:32.382 }' 00:22:32.382 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:32.382 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.382 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:32.382 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.382 04:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:33.316 "name": "raid_bdev1", 00:22:33.316 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:33.316 "strip_size_kb": 64, 00:22:33.316 "state": "online", 00:22:33.316 "raid_level": "raid5f", 00:22:33.316 "superblock": true, 00:22:33.316 "num_base_bdevs": 4, 00:22:33.316 "num_base_bdevs_discovered": 4, 00:22:33.316 "num_base_bdevs_operational": 4, 00:22:33.316 "process": { 00:22:33.316 "type": "rebuild", 00:22:33.316 "target": "spare", 00:22:33.316 "progress": { 00:22:33.316 "blocks": 174720, 00:22:33.316 "percent": 91 00:22:33.316 } 00:22:33.316 }, 00:22:33.316 "base_bdevs_list": [ 00:22:33.316 { 00:22:33.316 "name": "spare", 00:22:33.316 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:33.316 "is_configured": true, 00:22:33.316 "data_offset": 2048, 00:22:33.316 "data_size": 63488 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "name": "BaseBdev2", 
00:22:33.316 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:33.316 "is_configured": true, 00:22:33.316 "data_offset": 2048, 00:22:33.316 "data_size": 63488 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "name": "BaseBdev3", 00:22:33.316 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:33.316 "is_configured": true, 00:22:33.316 "data_offset": 2048, 00:22:33.316 "data_size": 63488 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "name": "BaseBdev4", 00:22:33.316 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:33.316 "is_configured": true, 00:22:33.316 "data_offset": 2048, 00:22:33.316 "data_size": 63488 00:22:33.316 } 00:22:33.316 ] 00:22:33.316 }' 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.316 04:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:34.251 [2024-11-27 04:44:21.570903] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:34.251 [2024-11-27 04:44:21.570992] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:34.251 [2024-11-27 04:44:21.571212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:34.510 04:44:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:34.510 "name": "raid_bdev1", 00:22:34.510 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:34.510 "strip_size_kb": 64, 00:22:34.510 "state": "online", 00:22:34.510 "raid_level": "raid5f", 00:22:34.510 "superblock": true, 00:22:34.510 "num_base_bdevs": 4, 00:22:34.510 "num_base_bdevs_discovered": 4, 00:22:34.510 "num_base_bdevs_operational": 4, 00:22:34.510 "base_bdevs_list": [ 00:22:34.510 { 00:22:34.510 "name": "spare", 00:22:34.510 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:34.510 "is_configured": true, 00:22:34.510 "data_offset": 2048, 00:22:34.510 "data_size": 63488 00:22:34.510 }, 00:22:34.510 { 00:22:34.510 "name": "BaseBdev2", 00:22:34.510 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:34.510 "is_configured": true, 00:22:34.510 "data_offset": 2048, 00:22:34.510 "data_size": 63488 00:22:34.510 }, 00:22:34.510 { 00:22:34.510 "name": "BaseBdev3", 00:22:34.510 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:34.510 "is_configured": true, 00:22:34.510 "data_offset": 2048, 00:22:34.510 
"data_size": 63488 00:22:34.510 }, 00:22:34.510 { 00:22:34.510 "name": "BaseBdev4", 00:22:34.510 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:34.510 "is_configured": true, 00:22:34.510 "data_offset": 2048, 00:22:34.510 "data_size": 63488 00:22:34.510 } 00:22:34.510 ] 00:22:34.510 }' 00:22:34.510 04:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.510 04:44:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:34.510 "name": "raid_bdev1", 00:22:34.510 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:34.510 "strip_size_kb": 64, 00:22:34.510 "state": "online", 00:22:34.510 "raid_level": "raid5f", 00:22:34.510 "superblock": true, 00:22:34.510 "num_base_bdevs": 4, 00:22:34.510 "num_base_bdevs_discovered": 4, 00:22:34.510 "num_base_bdevs_operational": 4, 00:22:34.510 "base_bdevs_list": [ 00:22:34.510 { 00:22:34.510 "name": "spare", 00:22:34.510 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:34.510 "is_configured": true, 00:22:34.510 "data_offset": 2048, 00:22:34.510 "data_size": 63488 00:22:34.510 }, 00:22:34.510 { 00:22:34.510 "name": "BaseBdev2", 00:22:34.510 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:34.510 "is_configured": true, 00:22:34.510 "data_offset": 2048, 00:22:34.510 "data_size": 63488 00:22:34.510 }, 00:22:34.510 { 00:22:34.510 "name": "BaseBdev3", 00:22:34.510 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:34.510 "is_configured": true, 00:22:34.510 "data_offset": 2048, 00:22:34.510 "data_size": 63488 00:22:34.510 }, 00:22:34.510 { 00:22:34.510 "name": "BaseBdev4", 00:22:34.510 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:34.510 "is_configured": true, 00:22:34.510 "data_offset": 2048, 00:22:34.510 "data_size": 63488 00:22:34.510 } 00:22:34.510 ] 00:22:34.510 }' 00:22:34.510 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.769 "name": "raid_bdev1", 00:22:34.769 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:34.769 "strip_size_kb": 64, 00:22:34.769 "state": "online", 00:22:34.769 "raid_level": "raid5f", 00:22:34.769 "superblock": true, 00:22:34.769 "num_base_bdevs": 4, 00:22:34.769 "num_base_bdevs_discovered": 4, 00:22:34.769 
"num_base_bdevs_operational": 4, 00:22:34.769 "base_bdevs_list": [ 00:22:34.769 { 00:22:34.769 "name": "spare", 00:22:34.769 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:34.769 "is_configured": true, 00:22:34.769 "data_offset": 2048, 00:22:34.769 "data_size": 63488 00:22:34.769 }, 00:22:34.769 { 00:22:34.769 "name": "BaseBdev2", 00:22:34.769 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:34.769 "is_configured": true, 00:22:34.769 "data_offset": 2048, 00:22:34.769 "data_size": 63488 00:22:34.769 }, 00:22:34.769 { 00:22:34.769 "name": "BaseBdev3", 00:22:34.769 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:34.769 "is_configured": true, 00:22:34.769 "data_offset": 2048, 00:22:34.769 "data_size": 63488 00:22:34.769 }, 00:22:34.769 { 00:22:34.769 "name": "BaseBdev4", 00:22:34.769 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:34.769 "is_configured": true, 00:22:34.769 "data_offset": 2048, 00:22:34.769 "data_size": 63488 00:22:34.769 } 00:22:34.769 ] 00:22:34.769 }' 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.769 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.337 [2024-11-27 04:44:22.788095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:35.337 [2024-11-27 04:44:22.788162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.337 [2024-11-27 04:44:22.788266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.337 [2024-11-27 04:44:22.788384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:22:35.337 [2024-11-27 04:44:22.788419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:35.337 04:44:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:35.337 04:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:35.596 /dev/nbd0 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:35.596 1+0 records in 00:22:35.596 1+0 records out 00:22:35.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369782 s, 11.1 MB/s 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:35.596 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:36.164 /dev/nbd1 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.164 1+0 records in 00:22:36.164 1+0 records out 00:22:36.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317979 s, 12.9 MB/s 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.164 04:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.732 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.992 [2024-11-27 04:44:24.357364] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:36.992 [2024-11-27 04:44:24.357434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.992 [2024-11-27 04:44:24.357464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:36.992 [2024-11-27 04:44:24.357478] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.992 [2024-11-27 04:44:24.360420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.992 [2024-11-27 04:44:24.360464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:36.992 [2024-11-27 04:44:24.360599] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:36.992 [2024-11-27 04:44:24.360665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:36.992 [2024-11-27 04:44:24.360871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:36.992 [2024-11-27 04:44:24.361011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:22:36.992 spare 00:22:36.992 [2024-11-27 04:44:24.361129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.992 [2024-11-27 04:44:24.461267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:36.992 [2024-11-27 04:44:24.461474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:36.992 [2024-11-27 04:44:24.461913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:22:36.992 [2024-11-27 04:44:24.468581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:36.992 [2024-11-27 04:44:24.468716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:36.992 [2024-11-27 04:44:24.469174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:36.992 04:44:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.992 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.992 "name": "raid_bdev1", 00:22:36.992 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:36.992 "strip_size_kb": 64, 00:22:36.992 "state": "online", 00:22:36.992 "raid_level": "raid5f", 00:22:36.992 "superblock": true, 00:22:36.992 "num_base_bdevs": 4, 00:22:36.992 "num_base_bdevs_discovered": 4, 00:22:36.992 "num_base_bdevs_operational": 4, 00:22:36.992 "base_bdevs_list": [ 00:22:36.992 { 00:22:36.992 "name": "spare", 00:22:36.992 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:36.992 "is_configured": true, 00:22:36.992 "data_offset": 2048, 00:22:36.992 "data_size": 63488 00:22:36.992 }, 00:22:36.992 { 00:22:36.992 "name": "BaseBdev2", 00:22:36.992 "uuid": 
"1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:36.992 "is_configured": true, 00:22:36.992 "data_offset": 2048, 00:22:36.992 "data_size": 63488 00:22:36.992 }, 00:22:36.992 { 00:22:36.992 "name": "BaseBdev3", 00:22:36.992 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:36.992 "is_configured": true, 00:22:36.992 "data_offset": 2048, 00:22:36.992 "data_size": 63488 00:22:36.992 }, 00:22:36.992 { 00:22:36.992 "name": "BaseBdev4", 00:22:36.992 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:36.992 "is_configured": true, 00:22:36.992 "data_offset": 2048, 00:22:36.992 "data_size": 63488 00:22:36.992 } 00:22:36.993 ] 00:22:36.993 }' 00:22:36.993 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.993 04:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.560 04:44:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:37.560 "name": "raid_bdev1", 00:22:37.560 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:37.560 "strip_size_kb": 64, 00:22:37.560 "state": "online", 00:22:37.560 "raid_level": "raid5f", 00:22:37.560 "superblock": true, 00:22:37.560 "num_base_bdevs": 4, 00:22:37.560 "num_base_bdevs_discovered": 4, 00:22:37.560 "num_base_bdevs_operational": 4, 00:22:37.560 "base_bdevs_list": [ 00:22:37.560 { 00:22:37.560 "name": "spare", 00:22:37.560 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:37.560 "is_configured": true, 00:22:37.560 "data_offset": 2048, 00:22:37.560 "data_size": 63488 00:22:37.560 }, 00:22:37.560 { 00:22:37.560 "name": "BaseBdev2", 00:22:37.560 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:37.560 "is_configured": true, 00:22:37.560 "data_offset": 2048, 00:22:37.560 "data_size": 63488 00:22:37.560 }, 00:22:37.560 { 00:22:37.560 "name": "BaseBdev3", 00:22:37.560 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:37.560 "is_configured": true, 00:22:37.560 "data_offset": 2048, 00:22:37.560 "data_size": 63488 00:22:37.560 }, 00:22:37.560 { 00:22:37.560 "name": "BaseBdev4", 00:22:37.560 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:37.560 "is_configured": true, 00:22:37.560 "data_offset": 2048, 00:22:37.560 "data_size": 63488 00:22:37.560 } 00:22:37.560 ] 00:22:37.560 }' 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:37.560 
04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.560 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.820 [2024-11-27 04:44:25.220839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.820 
04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.820 "name": "raid_bdev1", 00:22:37.820 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:37.820 "strip_size_kb": 64, 00:22:37.820 "state": "online", 00:22:37.820 "raid_level": "raid5f", 00:22:37.820 "superblock": true, 00:22:37.820 "num_base_bdevs": 4, 00:22:37.820 "num_base_bdevs_discovered": 3, 00:22:37.820 "num_base_bdevs_operational": 3, 00:22:37.820 "base_bdevs_list": [ 00:22:37.820 { 00:22:37.820 "name": null, 00:22:37.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.820 "is_configured": false, 00:22:37.820 "data_offset": 0, 00:22:37.820 "data_size": 63488 00:22:37.820 }, 00:22:37.820 { 00:22:37.820 "name": "BaseBdev2", 00:22:37.820 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:37.820 "is_configured": true, 00:22:37.820 "data_offset": 2048, 00:22:37.820 "data_size": 63488 00:22:37.820 }, 00:22:37.820 { 00:22:37.820 "name": "BaseBdev3", 00:22:37.820 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:37.820 "is_configured": true, 00:22:37.820 "data_offset": 2048, 00:22:37.820 "data_size": 63488 00:22:37.820 }, 00:22:37.820 { 00:22:37.820 "name": "BaseBdev4", 00:22:37.820 "uuid": 
"3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:37.820 "is_configured": true, 00:22:37.820 "data_offset": 2048, 00:22:37.820 "data_size": 63488 00:22:37.820 } 00:22:37.820 ] 00:22:37.820 }' 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.820 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.418 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:38.418 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.418 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.418 [2024-11-27 04:44:25.749055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:38.418 [2024-11-27 04:44:25.749349] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:38.418 [2024-11-27 04:44:25.749381] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:38.418 [2024-11-27 04:44:25.749430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:38.418 [2024-11-27 04:44:25.763311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:22:38.418 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.418 04:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:38.418 [2024-11-27 04:44:25.772403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:39.354 "name": "raid_bdev1", 00:22:39.354 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:39.354 "strip_size_kb": 64, 00:22:39.354 "state": "online", 00:22:39.354 
"raid_level": "raid5f", 00:22:39.354 "superblock": true, 00:22:39.354 "num_base_bdevs": 4, 00:22:39.354 "num_base_bdevs_discovered": 4, 00:22:39.354 "num_base_bdevs_operational": 4, 00:22:39.354 "process": { 00:22:39.354 "type": "rebuild", 00:22:39.354 "target": "spare", 00:22:39.354 "progress": { 00:22:39.354 "blocks": 17280, 00:22:39.354 "percent": 9 00:22:39.354 } 00:22:39.354 }, 00:22:39.354 "base_bdevs_list": [ 00:22:39.354 { 00:22:39.354 "name": "spare", 00:22:39.354 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:39.354 "is_configured": true, 00:22:39.354 "data_offset": 2048, 00:22:39.354 "data_size": 63488 00:22:39.354 }, 00:22:39.354 { 00:22:39.354 "name": "BaseBdev2", 00:22:39.354 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:39.354 "is_configured": true, 00:22:39.354 "data_offset": 2048, 00:22:39.354 "data_size": 63488 00:22:39.354 }, 00:22:39.354 { 00:22:39.354 "name": "BaseBdev3", 00:22:39.354 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:39.354 "is_configured": true, 00:22:39.354 "data_offset": 2048, 00:22:39.354 "data_size": 63488 00:22:39.354 }, 00:22:39.354 { 00:22:39.354 "name": "BaseBdev4", 00:22:39.354 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:39.354 "is_configured": true, 00:22:39.354 "data_offset": 2048, 00:22:39.354 "data_size": 63488 00:22:39.354 } 00:22:39.354 ] 00:22:39.354 }' 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.354 04:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.354 [2024-11-27 04:44:26.942155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:39.613 [2024-11-27 04:44:26.984849] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:39.613 [2024-11-27 04:44:26.985110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.613 [2024-11-27 04:44:26.985142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:39.613 [2024-11-27 04:44:26.985162] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.613 "name": "raid_bdev1", 00:22:39.613 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:39.613 "strip_size_kb": 64, 00:22:39.613 "state": "online", 00:22:39.613 "raid_level": "raid5f", 00:22:39.613 "superblock": true, 00:22:39.613 "num_base_bdevs": 4, 00:22:39.613 "num_base_bdevs_discovered": 3, 00:22:39.613 "num_base_bdevs_operational": 3, 00:22:39.613 "base_bdevs_list": [ 00:22:39.613 { 00:22:39.613 "name": null, 00:22:39.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.613 "is_configured": false, 00:22:39.613 "data_offset": 0, 00:22:39.613 "data_size": 63488 00:22:39.613 }, 00:22:39.613 { 00:22:39.613 "name": "BaseBdev2", 00:22:39.613 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:39.613 "is_configured": true, 00:22:39.613 "data_offset": 2048, 00:22:39.613 "data_size": 63488 00:22:39.613 }, 00:22:39.613 { 00:22:39.613 "name": "BaseBdev3", 00:22:39.613 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:39.613 "is_configured": true, 00:22:39.613 "data_offset": 2048, 00:22:39.613 "data_size": 63488 00:22:39.613 }, 00:22:39.613 { 00:22:39.613 "name": "BaseBdev4", 00:22:39.613 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:39.613 "is_configured": true, 00:22:39.613 "data_offset": 2048, 00:22:39.613 "data_size": 63488 00:22:39.613 } 00:22:39.613 ] 00:22:39.613 }' 
00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.613 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.181 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:40.181 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.181 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.181 [2024-11-27 04:44:27.507967] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:40.181 [2024-11-27 04:44:27.508183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.181 [2024-11-27 04:44:27.508230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:40.181 [2024-11-27 04:44:27.508250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.181 [2024-11-27 04:44:27.508935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.181 [2024-11-27 04:44:27.508976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:40.181 [2024-11-27 04:44:27.509097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:40.181 [2024-11-27 04:44:27.509272] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:40.181 [2024-11-27 04:44:27.509294] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:40.181 [2024-11-27 04:44:27.509344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:40.181 [2024-11-27 04:44:27.522857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:22:40.181 spare 00:22:40.181 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.181 04:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:40.181 [2024-11-27 04:44:27.531491] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:41.117 "name": "raid_bdev1", 00:22:41.117 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:41.117 "strip_size_kb": 64, 00:22:41.117 "state": 
"online", 00:22:41.117 "raid_level": "raid5f", 00:22:41.117 "superblock": true, 00:22:41.117 "num_base_bdevs": 4, 00:22:41.117 "num_base_bdevs_discovered": 4, 00:22:41.117 "num_base_bdevs_operational": 4, 00:22:41.117 "process": { 00:22:41.117 "type": "rebuild", 00:22:41.117 "target": "spare", 00:22:41.117 "progress": { 00:22:41.117 "blocks": 17280, 00:22:41.117 "percent": 9 00:22:41.117 } 00:22:41.117 }, 00:22:41.117 "base_bdevs_list": [ 00:22:41.117 { 00:22:41.117 "name": "spare", 00:22:41.117 "uuid": "1ba1a8e5-c8ec-53df-882c-c81c99b7d9a9", 00:22:41.117 "is_configured": true, 00:22:41.117 "data_offset": 2048, 00:22:41.117 "data_size": 63488 00:22:41.117 }, 00:22:41.117 { 00:22:41.117 "name": "BaseBdev2", 00:22:41.117 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:41.117 "is_configured": true, 00:22:41.117 "data_offset": 2048, 00:22:41.117 "data_size": 63488 00:22:41.117 }, 00:22:41.117 { 00:22:41.117 "name": "BaseBdev3", 00:22:41.117 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:41.117 "is_configured": true, 00:22:41.117 "data_offset": 2048, 00:22:41.117 "data_size": 63488 00:22:41.117 }, 00:22:41.117 { 00:22:41.117 "name": "BaseBdev4", 00:22:41.117 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:41.117 "is_configured": true, 00:22:41.117 "data_offset": 2048, 00:22:41.117 "data_size": 63488 00:22:41.117 } 00:22:41.117 ] 00:22:41.117 }' 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:41.117 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:41.118 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:41.118 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:41.118 04:44:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.118 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.118 [2024-11-27 04:44:28.689210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:41.377 [2024-11-27 04:44:28.744262] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:41.377 [2024-11-27 04:44:28.744490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.377 [2024-11-27 04:44:28.744529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:41.377 [2024-11-27 04:44:28.744542] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.377 04:44:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.377 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.377 "name": "raid_bdev1", 00:22:41.377 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:41.377 "strip_size_kb": 64, 00:22:41.377 "state": "online", 00:22:41.377 "raid_level": "raid5f", 00:22:41.377 "superblock": true, 00:22:41.378 "num_base_bdevs": 4, 00:22:41.378 "num_base_bdevs_discovered": 3, 00:22:41.378 "num_base_bdevs_operational": 3, 00:22:41.378 "base_bdevs_list": [ 00:22:41.378 { 00:22:41.378 "name": null, 00:22:41.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.378 "is_configured": false, 00:22:41.378 "data_offset": 0, 00:22:41.378 "data_size": 63488 00:22:41.378 }, 00:22:41.378 { 00:22:41.378 "name": "BaseBdev2", 00:22:41.378 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:41.378 "is_configured": true, 00:22:41.378 "data_offset": 2048, 00:22:41.378 "data_size": 63488 00:22:41.378 }, 00:22:41.378 { 00:22:41.378 "name": "BaseBdev3", 00:22:41.378 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:41.378 "is_configured": true, 00:22:41.378 "data_offset": 2048, 00:22:41.378 "data_size": 63488 00:22:41.378 }, 00:22:41.378 { 00:22:41.378 "name": "BaseBdev4", 00:22:41.378 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:41.378 "is_configured": true, 00:22:41.378 "data_offset": 2048, 00:22:41.378 
"data_size": 63488 00:22:41.378 } 00:22:41.378 ] 00:22:41.378 }' 00:22:41.378 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.378 04:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.636 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:41.636 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:41.636 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:41.636 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:41.636 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:41.636 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.636 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.636 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.636 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:41.894 "name": "raid_bdev1", 00:22:41.894 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:41.894 "strip_size_kb": 64, 00:22:41.894 "state": "online", 00:22:41.894 "raid_level": "raid5f", 00:22:41.894 "superblock": true, 00:22:41.894 "num_base_bdevs": 4, 00:22:41.894 "num_base_bdevs_discovered": 3, 00:22:41.894 "num_base_bdevs_operational": 3, 00:22:41.894 "base_bdevs_list": [ 00:22:41.894 { 00:22:41.894 "name": null, 00:22:41.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.894 
"is_configured": false, 00:22:41.894 "data_offset": 0, 00:22:41.894 "data_size": 63488 00:22:41.894 }, 00:22:41.894 { 00:22:41.894 "name": "BaseBdev2", 00:22:41.894 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:41.894 "is_configured": true, 00:22:41.894 "data_offset": 2048, 00:22:41.894 "data_size": 63488 00:22:41.894 }, 00:22:41.894 { 00:22:41.894 "name": "BaseBdev3", 00:22:41.894 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:41.894 "is_configured": true, 00:22:41.894 "data_offset": 2048, 00:22:41.894 "data_size": 63488 00:22:41.894 }, 00:22:41.894 { 00:22:41.894 "name": "BaseBdev4", 00:22:41.894 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:41.894 "is_configured": true, 00:22:41.894 "data_offset": 2048, 00:22:41.894 "data_size": 63488 00:22:41.894 } 00:22:41.894 ] 00:22:41.894 }' 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.894 04:44:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.894 [2024-11-27 04:44:29.411566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:41.894 [2024-11-27 04:44:29.411765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.894 [2024-11-27 04:44:29.411825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:41.894 [2024-11-27 04:44:29.411842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.894 [2024-11-27 04:44:29.412424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.894 [2024-11-27 04:44:29.412466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:41.894 [2024-11-27 04:44:29.412574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:41.894 [2024-11-27 04:44:29.412596] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:41.894 [2024-11-27 04:44:29.412613] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:41.894 [2024-11-27 04:44:29.412626] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:41.894 BaseBdev1 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.894 04:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.829 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.087 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.087 "name": "raid_bdev1", 00:22:43.087 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:43.087 "strip_size_kb": 64, 00:22:43.087 "state": "online", 00:22:43.087 "raid_level": "raid5f", 00:22:43.087 "superblock": true, 00:22:43.087 "num_base_bdevs": 4, 00:22:43.087 "num_base_bdevs_discovered": 3, 00:22:43.087 "num_base_bdevs_operational": 3, 00:22:43.087 "base_bdevs_list": [ 00:22:43.087 { 00:22:43.087 "name": null, 00:22:43.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.087 "is_configured": false, 00:22:43.087 
"data_offset": 0, 00:22:43.087 "data_size": 63488 00:22:43.087 }, 00:22:43.087 { 00:22:43.087 "name": "BaseBdev2", 00:22:43.087 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:43.087 "is_configured": true, 00:22:43.087 "data_offset": 2048, 00:22:43.087 "data_size": 63488 00:22:43.087 }, 00:22:43.087 { 00:22:43.087 "name": "BaseBdev3", 00:22:43.087 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:43.087 "is_configured": true, 00:22:43.087 "data_offset": 2048, 00:22:43.087 "data_size": 63488 00:22:43.087 }, 00:22:43.087 { 00:22:43.087 "name": "BaseBdev4", 00:22:43.087 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:43.087 "is_configured": true, 00:22:43.087 "data_offset": 2048, 00:22:43.087 "data_size": 63488 00:22:43.087 } 00:22:43.087 ] 00:22:43.087 }' 00:22:43.087 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.087 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.346 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:43.346 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:43.346 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:43.346 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:43.346 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:43.346 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.346 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.346 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.346 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:43.346 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.347 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:43.347 "name": "raid_bdev1", 00:22:43.347 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:43.347 "strip_size_kb": 64, 00:22:43.347 "state": "online", 00:22:43.347 "raid_level": "raid5f", 00:22:43.347 "superblock": true, 00:22:43.347 "num_base_bdevs": 4, 00:22:43.347 "num_base_bdevs_discovered": 3, 00:22:43.347 "num_base_bdevs_operational": 3, 00:22:43.347 "base_bdevs_list": [ 00:22:43.347 { 00:22:43.347 "name": null, 00:22:43.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.347 "is_configured": false, 00:22:43.347 "data_offset": 0, 00:22:43.347 "data_size": 63488 00:22:43.347 }, 00:22:43.347 { 00:22:43.347 "name": "BaseBdev2", 00:22:43.347 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:43.347 "is_configured": true, 00:22:43.347 "data_offset": 2048, 00:22:43.347 "data_size": 63488 00:22:43.347 }, 00:22:43.347 { 00:22:43.347 "name": "BaseBdev3", 00:22:43.347 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:43.347 "is_configured": true, 00:22:43.347 "data_offset": 2048, 00:22:43.347 "data_size": 63488 00:22:43.347 }, 00:22:43.347 { 00:22:43.347 "name": "BaseBdev4", 00:22:43.347 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:43.347 "is_configured": true, 00:22:43.347 "data_offset": 2048, 00:22:43.347 "data_size": 63488 00:22:43.347 } 00:22:43.347 ] 00:22:43.347 }' 00:22:43.605 04:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:43.605 
04:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.605 [2024-11-27 04:44:31.076188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.605 [2024-11-27 04:44:31.076392] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:43.605 [2024-11-27 04:44:31.076414] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:43.605 request: 00:22:43.605 { 00:22:43.605 "base_bdev": "BaseBdev1", 00:22:43.605 "raid_bdev": "raid_bdev1", 00:22:43.605 "method": "bdev_raid_add_base_bdev", 00:22:43.605 "req_id": 1 00:22:43.605 } 00:22:43.605 Got JSON-RPC error response 00:22:43.605 response: 00:22:43.605 { 00:22:43.605 "code": -22, 00:22:43.605 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:22:43.605 } 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:43.605 04:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.540 "name": "raid_bdev1", 00:22:44.540 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:44.540 "strip_size_kb": 64, 00:22:44.540 "state": "online", 00:22:44.540 "raid_level": "raid5f", 00:22:44.540 "superblock": true, 00:22:44.540 "num_base_bdevs": 4, 00:22:44.540 "num_base_bdevs_discovered": 3, 00:22:44.540 "num_base_bdevs_operational": 3, 00:22:44.540 "base_bdevs_list": [ 00:22:44.540 { 00:22:44.540 "name": null, 00:22:44.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.540 "is_configured": false, 00:22:44.540 "data_offset": 0, 00:22:44.540 "data_size": 63488 00:22:44.540 }, 00:22:44.540 { 00:22:44.540 "name": "BaseBdev2", 00:22:44.540 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:44.540 "is_configured": true, 00:22:44.540 "data_offset": 2048, 00:22:44.540 "data_size": 63488 00:22:44.540 }, 00:22:44.540 { 00:22:44.540 "name": "BaseBdev3", 00:22:44.540 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:44.540 "is_configured": true, 00:22:44.540 "data_offset": 2048, 00:22:44.540 "data_size": 63488 00:22:44.540 }, 00:22:44.540 { 00:22:44.540 "name": "BaseBdev4", 00:22:44.540 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:44.540 "is_configured": true, 00:22:44.540 "data_offset": 2048, 00:22:44.540 "data_size": 63488 00:22:44.540 } 00:22:44.540 ] 00:22:44.540 }' 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.540 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:45.107 "name": "raid_bdev1", 00:22:45.107 "uuid": "c77bda75-d19a-4d4e-b78a-e94b63c0372e", 00:22:45.107 "strip_size_kb": 64, 00:22:45.107 "state": "online", 00:22:45.107 "raid_level": "raid5f", 00:22:45.107 "superblock": true, 00:22:45.107 "num_base_bdevs": 4, 00:22:45.107 "num_base_bdevs_discovered": 3, 00:22:45.107 "num_base_bdevs_operational": 3, 00:22:45.107 "base_bdevs_list": [ 00:22:45.107 { 00:22:45.107 "name": null, 00:22:45.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.107 "is_configured": false, 00:22:45.107 "data_offset": 0, 00:22:45.107 "data_size": 63488 00:22:45.107 }, 00:22:45.107 { 00:22:45.107 "name": "BaseBdev2", 00:22:45.107 "uuid": "1f0130a2-4f02-58ba-8b7b-1e0a9f041134", 00:22:45.107 "is_configured": true, 
00:22:45.107 "data_offset": 2048, 00:22:45.107 "data_size": 63488 00:22:45.107 }, 00:22:45.107 { 00:22:45.107 "name": "BaseBdev3", 00:22:45.107 "uuid": "586010dc-e604-5eed-8216-ab9813663f31", 00:22:45.107 "is_configured": true, 00:22:45.107 "data_offset": 2048, 00:22:45.107 "data_size": 63488 00:22:45.107 }, 00:22:45.107 { 00:22:45.107 "name": "BaseBdev4", 00:22:45.107 "uuid": "3694f4d8-9ef9-5403-80dc-a6cc6ae5424f", 00:22:45.107 "is_configured": true, 00:22:45.107 "data_offset": 2048, 00:22:45.107 "data_size": 63488 00:22:45.107 } 00:22:45.107 ] 00:22:45.107 }' 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:45.107 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85670 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85670 ']' 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85670 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85670 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:45.366 killing process with pid 85670 00:22:45.366 04:44:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85670' 00:22:45.366 Received shutdown signal, test time was about 60.000000 seconds 00:22:45.366 00:22:45.366 Latency(us) 00:22:45.366 [2024-11-27T04:44:32.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.366 [2024-11-27T04:44:32.989Z] =================================================================================================================== 00:22:45.366 [2024-11-27T04:44:32.989Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85670 00:22:45.366 [2024-11-27 04:44:32.799619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:45.366 04:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85670 00:22:45.366 [2024-11-27 04:44:32.799794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.366 [2024-11-27 04:44:32.799901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.366 [2024-11-27 04:44:32.799933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:45.625 [2024-11-27 04:44:33.243072] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:46.999 04:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:22:46.999 00:22:46.999 real 0m28.663s 00:22:46.999 user 0m37.401s 00:22:46.999 sys 0m2.817s 00:22:46.999 04:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.999 04:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.999 ************************************ 00:22:46.999 END TEST raid5f_rebuild_test_sb 00:22:46.999 ************************************ 00:22:46.999 04:44:34 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:22:46.999 04:44:34 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:22:46.999 04:44:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:46.999 04:44:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.999 04:44:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:46.999 ************************************ 00:22:46.999 START TEST raid_state_function_test_sb_4k 00:22:46.999 ************************************ 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:46.999 04:44:34 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:46.999 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86492 00:22:46.999 Process raid pid: 86492 00:22:47.000 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86492' 00:22:47.000 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86492 00:22:47.000 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:47.000 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86492 ']' 00:22:47.000 04:44:34 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.000 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.000 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.000 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.000 04:44:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.000 [2024-11-27 04:44:34.454649] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:22:47.000 [2024-11-27 04:44:34.454892] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.258 [2024-11-27 04:44:34.636200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.258 [2024-11-27 04:44:34.771995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.529 [2024-11-27 04:44:34.981867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.529 [2024-11-27 04:44:34.981924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.787 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.787 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:22:47.787 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:22:47.787 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.787 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 [2024-11-27 04:44:35.410635] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:48.045 [2024-11-27 04:44:35.410697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:48.045 [2024-11-27 04:44:35.410714] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:48.045 [2024-11-27 04:44:35.410731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.045 
04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.045 "name": "Existed_Raid", 00:22:48.045 "uuid": "a77e8072-e33e-4aa1-8e7f-a9d8f3916cdf", 00:22:48.045 "strip_size_kb": 0, 00:22:48.045 "state": "configuring", 00:22:48.045 "raid_level": "raid1", 00:22:48.045 "superblock": true, 00:22:48.045 "num_base_bdevs": 2, 00:22:48.045 "num_base_bdevs_discovered": 0, 00:22:48.045 "num_base_bdevs_operational": 2, 00:22:48.045 "base_bdevs_list": [ 00:22:48.045 { 00:22:48.045 "name": "BaseBdev1", 00:22:48.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.045 "is_configured": false, 00:22:48.045 "data_offset": 0, 00:22:48.045 "data_size": 0 00:22:48.045 }, 00:22:48.045 { 00:22:48.045 "name": "BaseBdev2", 00:22:48.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.045 "is_configured": false, 00:22:48.045 "data_offset": 0, 00:22:48.045 "data_size": 0 00:22:48.045 } 00:22:48.045 ] 00:22:48.045 }' 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.045 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.303 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:22:48.303 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.303 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.303 [2024-11-27 04:44:35.886669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:48.303 [2024-11-27 04:44:35.886712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:48.303 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.303 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:48.303 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.303 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.303 [2024-11-27 04:44:35.894632] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:48.303 [2024-11-27 04:44:35.894684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:48.303 [2024-11-27 04:44:35.894700] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:48.303 [2024-11-27 04:44:35.894719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:48.303 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.303 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:22:48.303 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.303 04:44:35 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.561 [2024-11-27 04:44:35.939696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:48.561 BaseBdev1 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.561 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.561 [ 00:22:48.561 { 00:22:48.561 "name": "BaseBdev1", 00:22:48.561 "aliases": [ 00:22:48.561 
"6c7d4658-9365-4334-89dc-8d2aac47a944" 00:22:48.561 ], 00:22:48.561 "product_name": "Malloc disk", 00:22:48.561 "block_size": 4096, 00:22:48.561 "num_blocks": 8192, 00:22:48.561 "uuid": "6c7d4658-9365-4334-89dc-8d2aac47a944", 00:22:48.561 "assigned_rate_limits": { 00:22:48.561 "rw_ios_per_sec": 0, 00:22:48.561 "rw_mbytes_per_sec": 0, 00:22:48.561 "r_mbytes_per_sec": 0, 00:22:48.561 "w_mbytes_per_sec": 0 00:22:48.561 }, 00:22:48.561 "claimed": true, 00:22:48.561 "claim_type": "exclusive_write", 00:22:48.561 "zoned": false, 00:22:48.561 "supported_io_types": { 00:22:48.561 "read": true, 00:22:48.561 "write": true, 00:22:48.561 "unmap": true, 00:22:48.561 "flush": true, 00:22:48.561 "reset": true, 00:22:48.561 "nvme_admin": false, 00:22:48.561 "nvme_io": false, 00:22:48.561 "nvme_io_md": false, 00:22:48.561 "write_zeroes": true, 00:22:48.561 "zcopy": true, 00:22:48.561 "get_zone_info": false, 00:22:48.561 "zone_management": false, 00:22:48.561 "zone_append": false, 00:22:48.561 "compare": false, 00:22:48.561 "compare_and_write": false, 00:22:48.561 "abort": true, 00:22:48.561 "seek_hole": false, 00:22:48.562 "seek_data": false, 00:22:48.562 "copy": true, 00:22:48.562 "nvme_iov_md": false 00:22:48.562 }, 00:22:48.562 "memory_domains": [ 00:22:48.562 { 00:22:48.562 "dma_device_id": "system", 00:22:48.562 "dma_device_type": 1 00:22:48.562 }, 00:22:48.562 { 00:22:48.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.562 "dma_device_type": 2 00:22:48.562 } 00:22:48.562 ], 00:22:48.562 "driver_specific": {} 00:22:48.562 } 00:22:48.562 ] 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.562 04:44:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.562 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.562 "name": "Existed_Raid", 00:22:48.562 "uuid": "2cf4cf0a-f142-4328-a1d5-d49aa622fe71", 00:22:48.562 "strip_size_kb": 0, 00:22:48.562 "state": "configuring", 00:22:48.562 "raid_level": "raid1", 00:22:48.562 "superblock": true, 00:22:48.562 "num_base_bdevs": 2, 00:22:48.562 
"num_base_bdevs_discovered": 1, 00:22:48.562 "num_base_bdevs_operational": 2, 00:22:48.562 "base_bdevs_list": [ 00:22:48.562 { 00:22:48.562 "name": "BaseBdev1", 00:22:48.562 "uuid": "6c7d4658-9365-4334-89dc-8d2aac47a944", 00:22:48.562 "is_configured": true, 00:22:48.562 "data_offset": 256, 00:22:48.562 "data_size": 7936 00:22:48.562 }, 00:22:48.562 { 00:22:48.562 "name": "BaseBdev2", 00:22:48.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.562 "is_configured": false, 00:22:48.562 "data_offset": 0, 00:22:48.562 "data_size": 0 00:22:48.562 } 00:22:48.562 ] 00:22:48.562 }' 00:22:48.562 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.562 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.820 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:48.820 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.820 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.078 [2024-11-27 04:44:36.443894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:49.078 [2024-11-27 04:44:36.443973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.078 [2024-11-27 04:44:36.451950] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.078 [2024-11-27 04:44:36.454459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:49.078 [2024-11-27 04:44:36.454513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.078 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.079 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.079 "name": "Existed_Raid", 00:22:49.079 "uuid": "ea808254-225d-4c3c-a96c-14fe131289db", 00:22:49.079 "strip_size_kb": 0, 00:22:49.079 "state": "configuring", 00:22:49.079 "raid_level": "raid1", 00:22:49.079 "superblock": true, 00:22:49.079 "num_base_bdevs": 2, 00:22:49.079 "num_base_bdevs_discovered": 1, 00:22:49.079 "num_base_bdevs_operational": 2, 00:22:49.079 "base_bdevs_list": [ 00:22:49.079 { 00:22:49.079 "name": "BaseBdev1", 00:22:49.079 "uuid": "6c7d4658-9365-4334-89dc-8d2aac47a944", 00:22:49.079 "is_configured": true, 00:22:49.079 "data_offset": 256, 00:22:49.079 "data_size": 7936 00:22:49.079 }, 00:22:49.079 { 00:22:49.079 "name": "BaseBdev2", 00:22:49.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.079 "is_configured": false, 00:22:49.079 "data_offset": 0, 00:22:49.079 "data_size": 0 00:22:49.079 } 00:22:49.079 ] 00:22:49.079 }' 00:22:49.079 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.079 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.645 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:22:49.645 04:44:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.645 04:44:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.645 [2024-11-27 04:44:37.034929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.645 [2024-11-27 04:44:37.035251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:49.645 [2024-11-27 04:44:37.035272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:49.645 BaseBdev2 00:22:49.645 [2024-11-27 04:44:37.035603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:49.645 [2024-11-27 04:44:37.035874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:49.645 [2024-11-27 04:44:37.035909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:49.645 [2024-11-27 04:44:37.036091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:49.645 04:44:37 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.645 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.645 [ 00:22:49.645 { 00:22:49.645 "name": "BaseBdev2", 00:22:49.645 "aliases": [ 00:22:49.645 "180e05d5-a5f9-45c0-8e0e-c5cf07af4183" 00:22:49.645 ], 00:22:49.645 "product_name": "Malloc disk", 00:22:49.645 "block_size": 4096, 00:22:49.645 "num_blocks": 8192, 00:22:49.645 "uuid": "180e05d5-a5f9-45c0-8e0e-c5cf07af4183", 00:22:49.645 "assigned_rate_limits": { 00:22:49.645 "rw_ios_per_sec": 0, 00:22:49.645 "rw_mbytes_per_sec": 0, 00:22:49.645 "r_mbytes_per_sec": 0, 00:22:49.645 "w_mbytes_per_sec": 0 00:22:49.645 }, 00:22:49.645 "claimed": true, 00:22:49.645 "claim_type": "exclusive_write", 00:22:49.645 "zoned": false, 00:22:49.645 "supported_io_types": { 00:22:49.645 "read": true, 00:22:49.645 "write": true, 00:22:49.645 "unmap": true, 00:22:49.645 "flush": true, 00:22:49.645 "reset": true, 00:22:49.645 "nvme_admin": false, 00:22:49.645 "nvme_io": false, 00:22:49.645 "nvme_io_md": false, 00:22:49.645 "write_zeroes": true, 00:22:49.645 "zcopy": true, 00:22:49.645 "get_zone_info": false, 00:22:49.645 "zone_management": false, 00:22:49.645 "zone_append": false, 00:22:49.646 "compare": false, 00:22:49.646 "compare_and_write": false, 00:22:49.646 "abort": true, 00:22:49.646 "seek_hole": false, 00:22:49.646 "seek_data": false, 00:22:49.646 "copy": true, 00:22:49.646 "nvme_iov_md": false 
00:22:49.646 }, 00:22:49.646 "memory_domains": [ 00:22:49.646 { 00:22:49.646 "dma_device_id": "system", 00:22:49.646 "dma_device_type": 1 00:22:49.646 }, 00:22:49.646 { 00:22:49.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.646 "dma_device_type": 2 00:22:49.646 } 00:22:49.646 ], 00:22:49.646 "driver_specific": {} 00:22:49.646 } 00:22:49.646 ] 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.646 "name": "Existed_Raid", 00:22:49.646 "uuid": "ea808254-225d-4c3c-a96c-14fe131289db", 00:22:49.646 "strip_size_kb": 0, 00:22:49.646 "state": "online", 00:22:49.646 "raid_level": "raid1", 00:22:49.646 "superblock": true, 00:22:49.646 "num_base_bdevs": 2, 00:22:49.646 "num_base_bdevs_discovered": 2, 00:22:49.646 "num_base_bdevs_operational": 2, 00:22:49.646 "base_bdevs_list": [ 00:22:49.646 { 00:22:49.646 "name": "BaseBdev1", 00:22:49.646 "uuid": "6c7d4658-9365-4334-89dc-8d2aac47a944", 00:22:49.646 "is_configured": true, 00:22:49.646 "data_offset": 256, 00:22:49.646 "data_size": 7936 00:22:49.646 }, 00:22:49.646 { 00:22:49.646 "name": "BaseBdev2", 00:22:49.646 "uuid": "180e05d5-a5f9-45c0-8e0e-c5cf07af4183", 00:22:49.646 "is_configured": true, 00:22:49.646 "data_offset": 256, 00:22:49.646 "data_size": 7936 00:22:49.646 } 00:22:49.646 ] 00:22:49.646 }' 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.646 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:50.213 04:44:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:50.213 [2024-11-27 04:44:37.535397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:50.213 "name": "Existed_Raid", 00:22:50.213 "aliases": [ 00:22:50.213 "ea808254-225d-4c3c-a96c-14fe131289db" 00:22:50.213 ], 00:22:50.213 "product_name": "Raid Volume", 00:22:50.213 "block_size": 4096, 00:22:50.213 "num_blocks": 7936, 00:22:50.213 "uuid": "ea808254-225d-4c3c-a96c-14fe131289db", 00:22:50.213 "assigned_rate_limits": { 00:22:50.213 "rw_ios_per_sec": 0, 00:22:50.213 "rw_mbytes_per_sec": 0, 00:22:50.213 "r_mbytes_per_sec": 0, 00:22:50.213 "w_mbytes_per_sec": 0 00:22:50.213 }, 00:22:50.213 "claimed": false, 00:22:50.213 "zoned": false, 00:22:50.213 "supported_io_types": { 00:22:50.213 "read": true, 
00:22:50.213 "write": true, 00:22:50.213 "unmap": false, 00:22:50.213 "flush": false, 00:22:50.213 "reset": true, 00:22:50.213 "nvme_admin": false, 00:22:50.213 "nvme_io": false, 00:22:50.213 "nvme_io_md": false, 00:22:50.213 "write_zeroes": true, 00:22:50.213 "zcopy": false, 00:22:50.213 "get_zone_info": false, 00:22:50.213 "zone_management": false, 00:22:50.213 "zone_append": false, 00:22:50.213 "compare": false, 00:22:50.213 "compare_and_write": false, 00:22:50.213 "abort": false, 00:22:50.213 "seek_hole": false, 00:22:50.213 "seek_data": false, 00:22:50.213 "copy": false, 00:22:50.213 "nvme_iov_md": false 00:22:50.213 }, 00:22:50.213 "memory_domains": [ 00:22:50.213 { 00:22:50.213 "dma_device_id": "system", 00:22:50.213 "dma_device_type": 1 00:22:50.213 }, 00:22:50.213 { 00:22:50.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.213 "dma_device_type": 2 00:22:50.213 }, 00:22:50.213 { 00:22:50.213 "dma_device_id": "system", 00:22:50.213 "dma_device_type": 1 00:22:50.213 }, 00:22:50.213 { 00:22:50.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.213 "dma_device_type": 2 00:22:50.213 } 00:22:50.213 ], 00:22:50.213 "driver_specific": { 00:22:50.213 "raid": { 00:22:50.213 "uuid": "ea808254-225d-4c3c-a96c-14fe131289db", 00:22:50.213 "strip_size_kb": 0, 00:22:50.213 "state": "online", 00:22:50.213 "raid_level": "raid1", 00:22:50.213 "superblock": true, 00:22:50.213 "num_base_bdevs": 2, 00:22:50.213 "num_base_bdevs_discovered": 2, 00:22:50.213 "num_base_bdevs_operational": 2, 00:22:50.213 "base_bdevs_list": [ 00:22:50.213 { 00:22:50.213 "name": "BaseBdev1", 00:22:50.213 "uuid": "6c7d4658-9365-4334-89dc-8d2aac47a944", 00:22:50.213 "is_configured": true, 00:22:50.213 "data_offset": 256, 00:22:50.213 "data_size": 7936 00:22:50.213 }, 00:22:50.213 { 00:22:50.213 "name": "BaseBdev2", 00:22:50.213 "uuid": "180e05d5-a5f9-45c0-8e0e-c5cf07af4183", 00:22:50.213 "is_configured": true, 00:22:50.213 "data_offset": 256, 00:22:50.213 "data_size": 7936 00:22:50.213 } 
00:22:50.213 ] 00:22:50.213 } 00:22:50.213 } 00:22:50.213 }' 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:50.213 BaseBdev2' 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.213 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.213 [2024-11-27 04:44:37.759158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:50.472 04:44:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.472 "name": "Existed_Raid", 00:22:50.472 "uuid": "ea808254-225d-4c3c-a96c-14fe131289db", 00:22:50.472 "strip_size_kb": 0, 00:22:50.472 "state": "online", 00:22:50.472 "raid_level": "raid1", 00:22:50.472 "superblock": true, 00:22:50.472 
"num_base_bdevs": 2, 00:22:50.472 "num_base_bdevs_discovered": 1, 00:22:50.472 "num_base_bdevs_operational": 1, 00:22:50.472 "base_bdevs_list": [ 00:22:50.472 { 00:22:50.472 "name": null, 00:22:50.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.472 "is_configured": false, 00:22:50.472 "data_offset": 0, 00:22:50.472 "data_size": 7936 00:22:50.472 }, 00:22:50.472 { 00:22:50.472 "name": "BaseBdev2", 00:22:50.472 "uuid": "180e05d5-a5f9-45c0-8e0e-c5cf07af4183", 00:22:50.472 "is_configured": true, 00:22:50.472 "data_offset": 256, 00:22:50.472 "data_size": 7936 00:22:50.472 } 00:22:50.472 ] 00:22:50.472 }' 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.472 04:44:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.040 [2024-11-27 04:44:38.434416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:51.040 [2024-11-27 04:44:38.434549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:51.040 [2024-11-27 04:44:38.520490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:51.040 [2024-11-27 04:44:38.520584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:51.040 [2024-11-27 04:44:38.520606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:51.040 04:44:38 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86492 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86492 ']' 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86492 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86492 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:51.040 killing process with pid 86492 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86492' 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86492 00:22:51.040 [2024-11-27 04:44:38.609944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:51.040 04:44:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86492 00:22:51.040 [2024-11-27 04:44:38.624704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:52.417 04:44:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:22:52.417 00:22:52.417 real 0m5.338s 00:22:52.417 user 0m8.020s 00:22:52.417 sys 0m0.789s 00:22:52.417 04:44:39 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.417 ************************************ 00:22:52.417 END TEST raid_state_function_test_sb_4k 00:22:52.417 ************************************ 00:22:52.417 04:44:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.417 04:44:39 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:22:52.417 04:44:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:52.417 04:44:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.417 04:44:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:52.417 ************************************ 00:22:52.417 START TEST raid_superblock_test_4k 00:22:52.417 ************************************ 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:52.417 
04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86740 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86740 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86740 ']' 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.417 04:44:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.417 [2024-11-27 04:44:39.873039] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:22:52.417 [2024-11-27 04:44:39.873761] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86740 ] 00:22:52.676 [2024-11-27 04:44:40.063030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.676 [2024-11-27 04:44:40.217169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.935 [2024-11-27 04:44:40.426842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:52.935 [2024-11-27 04:44:40.426921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:53.193 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.194 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.453 malloc1 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.453 [2024-11-27 04:44:40.857742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:53.453 [2024-11-27 04:44:40.857844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.453 [2024-11-27 04:44:40.857879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:53.453 [2024-11-27 04:44:40.857894] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.453 [2024-11-27 04:44:40.860812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.453 [2024-11-27 04:44:40.860855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:53.453 pt1 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.453 malloc2 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.453 [2024-11-27 04:44:40.909789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:53.453 [2024-11-27 04:44:40.909863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.453 [2024-11-27 04:44:40.909900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:53.453 [2024-11-27 04:44:40.909914] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.453 [2024-11-27 04:44:40.912737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.453 [2024-11-27 
04:44:40.912799] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:53.453 pt2 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.453 [2024-11-27 04:44:40.917849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:53.453 [2024-11-27 04:44:40.920248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:53.453 [2024-11-27 04:44:40.920485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:53.453 [2024-11-27 04:44:40.920509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:53.453 [2024-11-27 04:44:40.920852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:53.453 [2024-11-27 04:44:40.921118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:53.453 [2024-11-27 04:44:40.921171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:53.453 [2024-11-27 04:44:40.921370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.453 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.454 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.454 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.454 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.454 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.454 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.454 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.454 "name": "raid_bdev1", 00:22:53.454 "uuid": "20f3ce98-ad8d-4214-919f-2f7aaff742ca", 00:22:53.454 "strip_size_kb": 0, 00:22:53.454 "state": "online", 00:22:53.454 "raid_level": "raid1", 00:22:53.454 "superblock": true, 00:22:53.454 "num_base_bdevs": 2, 00:22:53.454 
"num_base_bdevs_discovered": 2, 00:22:53.454 "num_base_bdevs_operational": 2, 00:22:53.454 "base_bdevs_list": [ 00:22:53.454 { 00:22:53.454 "name": "pt1", 00:22:53.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:53.454 "is_configured": true, 00:22:53.454 "data_offset": 256, 00:22:53.454 "data_size": 7936 00:22:53.454 }, 00:22:53.454 { 00:22:53.454 "name": "pt2", 00:22:53.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:53.454 "is_configured": true, 00:22:53.454 "data_offset": 256, 00:22:53.454 "data_size": 7936 00:22:53.454 } 00:22:53.454 ] 00:22:53.454 }' 00:22:53.454 04:44:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.454 04:44:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.023 [2024-11-27 04:44:41.434318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.023 "name": "raid_bdev1", 00:22:54.023 "aliases": [ 00:22:54.023 "20f3ce98-ad8d-4214-919f-2f7aaff742ca" 00:22:54.023 ], 00:22:54.023 "product_name": "Raid Volume", 00:22:54.023 "block_size": 4096, 00:22:54.023 "num_blocks": 7936, 00:22:54.023 "uuid": "20f3ce98-ad8d-4214-919f-2f7aaff742ca", 00:22:54.023 "assigned_rate_limits": { 00:22:54.023 "rw_ios_per_sec": 0, 00:22:54.023 "rw_mbytes_per_sec": 0, 00:22:54.023 "r_mbytes_per_sec": 0, 00:22:54.023 "w_mbytes_per_sec": 0 00:22:54.023 }, 00:22:54.023 "claimed": false, 00:22:54.023 "zoned": false, 00:22:54.023 "supported_io_types": { 00:22:54.023 "read": true, 00:22:54.023 "write": true, 00:22:54.023 "unmap": false, 00:22:54.023 "flush": false, 00:22:54.023 "reset": true, 00:22:54.023 "nvme_admin": false, 00:22:54.023 "nvme_io": false, 00:22:54.023 "nvme_io_md": false, 00:22:54.023 "write_zeroes": true, 00:22:54.023 "zcopy": false, 00:22:54.023 "get_zone_info": false, 00:22:54.023 "zone_management": false, 00:22:54.023 "zone_append": false, 00:22:54.023 "compare": false, 00:22:54.023 "compare_and_write": false, 00:22:54.023 "abort": false, 00:22:54.023 "seek_hole": false, 00:22:54.023 "seek_data": false, 00:22:54.023 "copy": false, 00:22:54.023 "nvme_iov_md": false 00:22:54.023 }, 00:22:54.023 "memory_domains": [ 00:22:54.023 { 00:22:54.023 "dma_device_id": "system", 00:22:54.023 "dma_device_type": 1 00:22:54.023 }, 00:22:54.023 { 00:22:54.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.023 "dma_device_type": 2 00:22:54.023 }, 00:22:54.023 { 00:22:54.023 "dma_device_id": "system", 00:22:54.023 "dma_device_type": 1 00:22:54.023 }, 00:22:54.023 { 00:22:54.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.023 "dma_device_type": 2 00:22:54.023 } 00:22:54.023 ], 
00:22:54.023 "driver_specific": { 00:22:54.023 "raid": { 00:22:54.023 "uuid": "20f3ce98-ad8d-4214-919f-2f7aaff742ca", 00:22:54.023 "strip_size_kb": 0, 00:22:54.023 "state": "online", 00:22:54.023 "raid_level": "raid1", 00:22:54.023 "superblock": true, 00:22:54.023 "num_base_bdevs": 2, 00:22:54.023 "num_base_bdevs_discovered": 2, 00:22:54.023 "num_base_bdevs_operational": 2, 00:22:54.023 "base_bdevs_list": [ 00:22:54.023 { 00:22:54.023 "name": "pt1", 00:22:54.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:54.023 "is_configured": true, 00:22:54.023 "data_offset": 256, 00:22:54.023 "data_size": 7936 00:22:54.023 }, 00:22:54.023 { 00:22:54.023 "name": "pt2", 00:22:54.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:54.023 "is_configured": true, 00:22:54.023 "data_offset": 256, 00:22:54.023 "data_size": 7936 00:22:54.023 } 00:22:54.023 ] 00:22:54.023 } 00:22:54.023 } 00:22:54.023 }' 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:54.023 pt2' 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.023 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.282 [2024-11-27 04:44:41.682362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=20f3ce98-ad8d-4214-919f-2f7aaff742ca 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 20f3ce98-ad8d-4214-919f-2f7aaff742ca ']' 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.282 [2024-11-27 04:44:41.725979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:54.282 [2024-11-27 04:44:41.726018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.282 [2024-11-27 04:44:41.726181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.282 [2024-11-27 04:44:41.726290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.282 [2024-11-27 04:44:41.726311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.282 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.283 [2024-11-27 04:44:41.874102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:54.283 [2024-11-27 04:44:41.876715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:54.283 [2024-11-27 04:44:41.876844] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:54.283 [2024-11-27 04:44:41.876927] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:54.283 [2024-11-27 04:44:41.876964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:54.283 [2024-11-27 04:44:41.876980] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:54.283 request: 00:22:54.283 { 00:22:54.283 "name": "raid_bdev1", 00:22:54.283 "raid_level": "raid1", 00:22:54.283 "base_bdevs": [ 00:22:54.283 "malloc1", 00:22:54.283 "malloc2" 00:22:54.283 ], 00:22:54.283 "superblock": false, 00:22:54.283 "method": "bdev_raid_create", 00:22:54.283 "req_id": 1 00:22:54.283 } 00:22:54.283 Got JSON-RPC error response 00:22:54.283 response: 00:22:54.283 { 00:22:54.283 "code": -17, 00:22:54.283 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:54.283 } 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.283 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.541 [2024-11-27 04:44:41.938095] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:54.541 [2024-11-27 04:44:41.938334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.541 [2024-11-27 04:44:41.938526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:54.541 [2024-11-27 04:44:41.938660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.541 [2024-11-27 04:44:41.941654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.541 [2024-11-27 04:44:41.941830] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:54.541 [2024-11-27 04:44:41.942056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:54.541 [2024-11-27 04:44:41.942269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:54.541 pt1 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.541 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.542 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.542 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:54.542 "name": "raid_bdev1", 00:22:54.542 "uuid": "20f3ce98-ad8d-4214-919f-2f7aaff742ca", 00:22:54.542 "strip_size_kb": 0, 00:22:54.542 "state": "configuring", 00:22:54.542 "raid_level": "raid1", 00:22:54.542 "superblock": true, 00:22:54.542 "num_base_bdevs": 2, 00:22:54.542 "num_base_bdevs_discovered": 1, 00:22:54.542 "num_base_bdevs_operational": 2, 00:22:54.542 "base_bdevs_list": [ 00:22:54.542 { 00:22:54.542 "name": "pt1", 00:22:54.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:54.542 "is_configured": true, 00:22:54.542 "data_offset": 256, 00:22:54.542 "data_size": 7936 00:22:54.542 }, 00:22:54.542 { 00:22:54.542 "name": null, 00:22:54.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:54.542 "is_configured": false, 00:22:54.542 "data_offset": 256, 00:22:54.542 "data_size": 7936 00:22:54.542 } 
00:22:54.542 ] 00:22:54.542 }' 00:22:54.542 04:44:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:54.542 04:44:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.107 [2024-11-27 04:44:42.466307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:55.107 [2024-11-27 04:44:42.466529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.107 [2024-11-27 04:44:42.466609] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:55.107 [2024-11-27 04:44:42.466853] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.107 [2024-11-27 04:44:42.467453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.107 [2024-11-27 04:44:42.467495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:55.107 [2024-11-27 04:44:42.467600] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:55.107 [2024-11-27 04:44:42.467641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:55.107 [2024-11-27 04:44:42.467812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:22:55.107 [2024-11-27 04:44:42.467834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:55.107 [2024-11-27 04:44:42.468141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:55.107 [2024-11-27 04:44:42.468384] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:55.107 [2024-11-27 04:44:42.468405] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:55.107 [2024-11-27 04:44:42.468582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.107 pt2 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.107 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.108 04:44:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.108 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.108 "name": "raid_bdev1", 00:22:55.108 "uuid": "20f3ce98-ad8d-4214-919f-2f7aaff742ca", 00:22:55.108 "strip_size_kb": 0, 00:22:55.108 "state": "online", 00:22:55.108 "raid_level": "raid1", 00:22:55.108 "superblock": true, 00:22:55.108 "num_base_bdevs": 2, 00:22:55.108 "num_base_bdevs_discovered": 2, 00:22:55.108 "num_base_bdevs_operational": 2, 00:22:55.108 "base_bdevs_list": [ 00:22:55.108 { 00:22:55.108 "name": "pt1", 00:22:55.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:55.108 "is_configured": true, 00:22:55.108 "data_offset": 256, 00:22:55.108 "data_size": 7936 00:22:55.108 }, 00:22:55.108 { 00:22:55.108 "name": "pt2", 00:22:55.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:55.108 "is_configured": true, 00:22:55.108 "data_offset": 256, 00:22:55.108 "data_size": 7936 00:22:55.108 } 00:22:55.108 ] 00:22:55.108 }' 00:22:55.108 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.108 04:44:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.366 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:22:55.366 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:55.366 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:55.366 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:55.366 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:55.366 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:55.366 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:55.366 04:44:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.366 04:44:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.366 04:44:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:55.625 [2024-11-27 04:44:42.986738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:55.625 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.625 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:55.625 "name": "raid_bdev1", 00:22:55.625 "aliases": [ 00:22:55.625 "20f3ce98-ad8d-4214-919f-2f7aaff742ca" 00:22:55.625 ], 00:22:55.625 "product_name": "Raid Volume", 00:22:55.625 "block_size": 4096, 00:22:55.625 "num_blocks": 7936, 00:22:55.625 "uuid": "20f3ce98-ad8d-4214-919f-2f7aaff742ca", 00:22:55.625 "assigned_rate_limits": { 00:22:55.625 "rw_ios_per_sec": 0, 00:22:55.625 "rw_mbytes_per_sec": 0, 00:22:55.625 "r_mbytes_per_sec": 0, 00:22:55.625 "w_mbytes_per_sec": 0 00:22:55.625 }, 00:22:55.625 "claimed": false, 00:22:55.625 "zoned": false, 00:22:55.625 "supported_io_types": { 00:22:55.625 "read": true, 00:22:55.625 "write": true, 00:22:55.625 "unmap": false, 
00:22:55.625 "flush": false, 00:22:55.625 "reset": true, 00:22:55.625 "nvme_admin": false, 00:22:55.625 "nvme_io": false, 00:22:55.625 "nvme_io_md": false, 00:22:55.625 "write_zeroes": true, 00:22:55.625 "zcopy": false, 00:22:55.625 "get_zone_info": false, 00:22:55.625 "zone_management": false, 00:22:55.625 "zone_append": false, 00:22:55.625 "compare": false, 00:22:55.625 "compare_and_write": false, 00:22:55.625 "abort": false, 00:22:55.625 "seek_hole": false, 00:22:55.625 "seek_data": false, 00:22:55.625 "copy": false, 00:22:55.625 "nvme_iov_md": false 00:22:55.625 }, 00:22:55.625 "memory_domains": [ 00:22:55.625 { 00:22:55.625 "dma_device_id": "system", 00:22:55.625 "dma_device_type": 1 00:22:55.625 }, 00:22:55.625 { 00:22:55.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.625 "dma_device_type": 2 00:22:55.625 }, 00:22:55.625 { 00:22:55.625 "dma_device_id": "system", 00:22:55.625 "dma_device_type": 1 00:22:55.625 }, 00:22:55.625 { 00:22:55.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.625 "dma_device_type": 2 00:22:55.625 } 00:22:55.625 ], 00:22:55.625 "driver_specific": { 00:22:55.625 "raid": { 00:22:55.625 "uuid": "20f3ce98-ad8d-4214-919f-2f7aaff742ca", 00:22:55.625 "strip_size_kb": 0, 00:22:55.625 "state": "online", 00:22:55.625 "raid_level": "raid1", 00:22:55.625 "superblock": true, 00:22:55.625 "num_base_bdevs": 2, 00:22:55.625 "num_base_bdevs_discovered": 2, 00:22:55.625 "num_base_bdevs_operational": 2, 00:22:55.625 "base_bdevs_list": [ 00:22:55.625 { 00:22:55.625 "name": "pt1", 00:22:55.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:55.625 "is_configured": true, 00:22:55.625 "data_offset": 256, 00:22:55.625 "data_size": 7936 00:22:55.625 }, 00:22:55.625 { 00:22:55.625 "name": "pt2", 00:22:55.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:55.625 "is_configured": true, 00:22:55.625 "data_offset": 256, 00:22:55.625 "data_size": 7936 00:22:55.625 } 00:22:55.625 ] 00:22:55.625 } 00:22:55.625 } 00:22:55.625 }' 00:22:55.625 
04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:55.625 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:55.625 pt2' 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.626 
04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.626 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.626 [2024-11-27 04:44:43.242811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:55.884 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.884 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 20f3ce98-ad8d-4214-919f-2f7aaff742ca '!=' 20f3ce98-ad8d-4214-919f-2f7aaff742ca ']' 00:22:55.884 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:55.884 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:55.884 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.885 [2024-11-27 04:44:43.282531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:55.885 
04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.885 "name": "raid_bdev1", 00:22:55.885 "uuid": "20f3ce98-ad8d-4214-919f-2f7aaff742ca", 
00:22:55.885 "strip_size_kb": 0, 00:22:55.885 "state": "online", 00:22:55.885 "raid_level": "raid1", 00:22:55.885 "superblock": true, 00:22:55.885 "num_base_bdevs": 2, 00:22:55.885 "num_base_bdevs_discovered": 1, 00:22:55.885 "num_base_bdevs_operational": 1, 00:22:55.885 "base_bdevs_list": [ 00:22:55.885 { 00:22:55.885 "name": null, 00:22:55.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.885 "is_configured": false, 00:22:55.885 "data_offset": 0, 00:22:55.885 "data_size": 7936 00:22:55.885 }, 00:22:55.885 { 00:22:55.885 "name": "pt2", 00:22:55.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:55.885 "is_configured": true, 00:22:55.885 "data_offset": 256, 00:22:55.885 "data_size": 7936 00:22:55.885 } 00:22:55.885 ] 00:22:55.885 }' 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.885 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.453 [2024-11-27 04:44:43.794767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:56.453 [2024-11-27 04:44:43.794876] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:56.453 [2024-11-27 04:44:43.795021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:56.453 [2024-11-27 04:44:43.795110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:56.453 [2024-11-27 04:44:43.795136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:56.453 04:44:43 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:22:56.453 04:44:43 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.453 [2024-11-27 04:44:43.870707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:56.453 [2024-11-27 04:44:43.870994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.453 [2024-11-27 04:44:43.871080] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:56.453 [2024-11-27 04:44:43.871382] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.453 [2024-11-27 04:44:43.874734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.453 [2024-11-27 04:44:43.874958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:56.453 [2024-11-27 04:44:43.875220] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:56.453 [2024-11-27 04:44:43.875426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:56.453 [2024-11-27 04:44:43.875783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:56.453 pt2 00:22:56.453 [2024-11-27 04:44:43.875931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:56.453 [2024-11-27 04:44:43.876280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:56.453 [2024-11-27 
04:44:43.876531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:56.453 [2024-11-27 04:44:43.876552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:56.453 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:56.453 [2024-11-27 04:44:43.876765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:56.454 "name": "raid_bdev1", 00:22:56.454 "uuid": "20f3ce98-ad8d-4214-919f-2f7aaff742ca", 00:22:56.454 "strip_size_kb": 0, 00:22:56.454 "state": "online", 00:22:56.454 "raid_level": "raid1", 00:22:56.454 "superblock": true, 00:22:56.454 "num_base_bdevs": 2, 00:22:56.454 "num_base_bdevs_discovered": 1, 00:22:56.454 "num_base_bdevs_operational": 1, 00:22:56.454 "base_bdevs_list": [ 00:22:56.454 { 00:22:56.454 "name": null, 00:22:56.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.454 "is_configured": false, 00:22:56.454 "data_offset": 256, 00:22:56.454 "data_size": 7936 00:22:56.454 }, 00:22:56.454 { 00:22:56.454 "name": "pt2", 00:22:56.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:56.454 "is_configured": true, 00:22:56.454 "data_offset": 256, 00:22:56.454 "data_size": 7936 00:22:56.454 } 00:22:56.454 ] 00:22:56.454 }' 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:56.454 04:44:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 [2024-11-27 04:44:44.415434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:57.064 [2024-11-27 04:44:44.415693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:57.064 [2024-11-27 04:44:44.415862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.064 [2024-11-27 04:44:44.415962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:57.064 [2024-11-27 04:44:44.415983] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 [2024-11-27 04:44:44.479498] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:57.064 [2024-11-27 04:44:44.479829] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.064 [2024-11-27 04:44:44.479933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:57.064 [2024-11-27 04:44:44.480173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.064 [2024-11-27 04:44:44.483802] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.064 [2024-11-27 04:44:44.484003] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:57.064 [2024-11-27 04:44:44.484277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:57.064 pt1 00:22:57.064 [2024-11-27 04:44:44.484471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:57.064 [2024-11-27 04:44:44.484792] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:57.064 [2024-11-27 04:44:44.484815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:57.064 [2024-11-27 04:44:44.484846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:57.064 [2024-11-27 04:44:44.484930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:57.064 [2024-11-27 04:44:44.485102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:57.064 [2024-11-27 04:44:44.485122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:57.064 [2024-11-27 04:44:44.485478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.064 [2024-11-27 04:44:44.485740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:57.064 
04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.064 [2024-11-27 04:44:44.485768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:57.064 [2024-11-27 04:44:44.486006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.064 "name": "raid_bdev1", 00:22:57.064 "uuid": "20f3ce98-ad8d-4214-919f-2f7aaff742ca", 00:22:57.064 "strip_size_kb": 0, 00:22:57.064 "state": "online", 00:22:57.064 "raid_level": 
"raid1", 00:22:57.064 "superblock": true, 00:22:57.064 "num_base_bdevs": 2, 00:22:57.064 "num_base_bdevs_discovered": 1, 00:22:57.064 "num_base_bdevs_operational": 1, 00:22:57.064 "base_bdevs_list": [ 00:22:57.064 { 00:22:57.064 "name": null, 00:22:57.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.064 "is_configured": false, 00:22:57.064 "data_offset": 256, 00:22:57.064 "data_size": 7936 00:22:57.064 }, 00:22:57.064 { 00:22:57.064 "name": "pt2", 00:22:57.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.064 "is_configured": true, 00:22:57.064 "data_offset": 256, 00:22:57.064 "data_size": 7936 00:22:57.064 } 00:22:57.064 ] 00:22:57.064 }' 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.064 04:44:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 
00:22:57.631 [2024-11-27 04:44:45.064917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 20f3ce98-ad8d-4214-919f-2f7aaff742ca '!=' 20f3ce98-ad8d-4214-919f-2f7aaff742ca ']' 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86740 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86740 ']' 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86740 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86740 00:22:57.631 killing process with pid 86740 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86740' 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86740 00:22:57.631 [2024-11-27 04:44:45.139920] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:57.631 04:44:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86740 00:22:57.631 [2024-11-27 04:44:45.140118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.631 [2024-11-27 04:44:45.140206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:57.631 [2024-11-27 04:44:45.140236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:57.890 [2024-11-27 04:44:45.348228] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:59.271 ************************************ 00:22:59.271 END TEST raid_superblock_test_4k 00:22:59.271 ************************************ 00:22:59.271 04:44:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:22:59.271 00:22:59.271 real 0m6.793s 00:22:59.271 user 0m10.627s 00:22:59.271 sys 0m0.999s 00:22:59.272 04:44:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.272 04:44:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:59.272 04:44:46 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:22:59.272 04:44:46 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:22:59.272 04:44:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:59.272 04:44:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.272 04:44:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:59.272 ************************************ 00:22:59.272 START TEST raid_rebuild_test_sb_4k 00:22:59.272 ************************************ 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:59.272 04:44:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87074 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87074 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87074 ']' 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.272 04:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:59.272 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:59.272 Zero copy mechanism will not be used. 00:22:59.272 [2024-11-27 04:44:46.690444] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:22:59.272 [2024-11-27 04:44:46.690617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87074 ] 00:22:59.272 [2024-11-27 04:44:46.881870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.530 [2024-11-27 04:44:47.042118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.788 [2024-11-27 04:44:47.250027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:59.788 [2024-11-27 04:44:47.250105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.353 BaseBdev1_malloc 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.353 [2024-11-27 04:44:47.737619] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:00.353 [2024-11-27 04:44:47.737850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.353 [2024-11-27 04:44:47.737931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:00.353 [2024-11-27 04:44:47.738127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.353 [2024-11-27 04:44:47.740895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.353 [2024-11-27 04:44:47.740950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:00.353 BaseBdev1 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.353 BaseBdev2_malloc 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.353 [2024-11-27 04:44:47.790591] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:00.353 [2024-11-27 04:44:47.791652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:23:00.353 [2024-11-27 04:44:47.791697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:00.353 [2024-11-27 04:44:47.791717] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.353 BaseBdev2 00:23:00.353 [2024-11-27 04:44:47.794526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.353 [2024-11-27 04:44:47.794579] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.353 spare_malloc 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.353 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.353 spare_delay 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.354 
[2024-11-27 04:44:47.857689] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:00.354 [2024-11-27 04:44:47.857912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.354 [2024-11-27 04:44:47.857991] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:00.354 [2024-11-27 04:44:47.858108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.354 [2024-11-27 04:44:47.860977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.354 [2024-11-27 04:44:47.861030] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:00.354 spare 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.354 [2024-11-27 04:44:47.865982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:00.354 [2024-11-27 04:44:47.868741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:00.354 [2024-11-27 04:44:47.869150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:00.354 [2024-11-27 04:44:47.869306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:00.354 [2024-11-27 04:44:47.869687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:00.354 [2024-11-27 04:44:47.870084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:00.354 [2024-11-27 
04:44:47.870232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:00.354 [2024-11-27 04:44:47.870605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:00.354 "name": "raid_bdev1", 00:23:00.354 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:00.354 "strip_size_kb": 0, 00:23:00.354 "state": "online", 00:23:00.354 "raid_level": "raid1", 00:23:00.354 "superblock": true, 00:23:00.354 "num_base_bdevs": 2, 00:23:00.354 "num_base_bdevs_discovered": 2, 00:23:00.354 "num_base_bdevs_operational": 2, 00:23:00.354 "base_bdevs_list": [ 00:23:00.354 { 00:23:00.354 "name": "BaseBdev1", 00:23:00.354 "uuid": "ed584ccd-b4fe-5d43-80cf-72affa75b2c5", 00:23:00.354 "is_configured": true, 00:23:00.354 "data_offset": 256, 00:23:00.354 "data_size": 7936 00:23:00.354 }, 00:23:00.354 { 00:23:00.354 "name": "BaseBdev2", 00:23:00.354 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:00.354 "is_configured": true, 00:23:00.354 "data_offset": 256, 00:23:00.354 "data_size": 7936 00:23:00.354 } 00:23:00.354 ] 00:23:00.354 }' 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:00.354 04:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:00.922 [2024-11-27 04:44:48.363082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:00.922 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:00.922 
04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:01.181 [2024-11-27 04:44:48.694862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:01.181 /dev/nbd0 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:01.181 1+0 records in 00:23:01.181 1+0 records out 00:23:01.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567257 s, 7.2 MB/s 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:23:01.181 04:44:48 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:01.181 04:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:23:02.125 7936+0 records in 00:23:02.125 7936+0 records out 00:23:02.125 32505856 bytes (33 MB, 31 MiB) copied, 0.946366 s, 34.3 MB/s 00:23:02.125 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:02.125 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:02.125 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:02.125 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:02.125 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:23:02.125 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.125 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:02.384 
[2024-11-27 04:44:49.973359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:02.384 [2024-11-27 04:44:49.986165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:02.384 04:44:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:02.384 04:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.642 04:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.642 04:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.642 "name": "raid_bdev1", 00:23:02.642 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:02.642 "strip_size_kb": 0, 00:23:02.642 "state": "online", 00:23:02.642 "raid_level": "raid1", 00:23:02.642 "superblock": true, 00:23:02.642 "num_base_bdevs": 2, 00:23:02.642 "num_base_bdevs_discovered": 1, 00:23:02.642 "num_base_bdevs_operational": 1, 00:23:02.642 "base_bdevs_list": [ 00:23:02.642 { 00:23:02.642 "name": null, 00:23:02.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.642 "is_configured": false, 00:23:02.642 "data_offset": 0, 00:23:02.642 "data_size": 7936 00:23:02.642 }, 00:23:02.642 { 00:23:02.642 "name": "BaseBdev2", 00:23:02.642 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:02.642 "is_configured": true, 00:23:02.642 "data_offset": 256, 00:23:02.642 
"data_size": 7936 00:23:02.642 } 00:23:02.642 ] 00:23:02.642 }' 00:23:02.642 04:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.642 04:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:02.900 04:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:02.900 04:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.900 04:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:02.900 [2024-11-27 04:44:50.498348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:02.900 [2024-11-27 04:44:50.515622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:23:02.900 04:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.900 04:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:02.900 [2024-11-27 04:44:50.518230] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:04.275 "name": "raid_bdev1", 00:23:04.275 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:04.275 "strip_size_kb": 0, 00:23:04.275 "state": "online", 00:23:04.275 "raid_level": "raid1", 00:23:04.275 "superblock": true, 00:23:04.275 "num_base_bdevs": 2, 00:23:04.275 "num_base_bdevs_discovered": 2, 00:23:04.275 "num_base_bdevs_operational": 2, 00:23:04.275 "process": { 00:23:04.275 "type": "rebuild", 00:23:04.275 "target": "spare", 00:23:04.275 "progress": { 00:23:04.275 "blocks": 2560, 00:23:04.275 "percent": 32 00:23:04.275 } 00:23:04.275 }, 00:23:04.275 "base_bdevs_list": [ 00:23:04.275 { 00:23:04.275 "name": "spare", 00:23:04.275 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:04.275 "is_configured": true, 00:23:04.275 "data_offset": 256, 00:23:04.275 "data_size": 7936 00:23:04.275 }, 00:23:04.275 { 00:23:04.275 "name": "BaseBdev2", 00:23:04.275 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:04.275 "is_configured": true, 00:23:04.275 "data_offset": 256, 00:23:04.275 "data_size": 7936 00:23:04.275 } 00:23:04.275 ] 00:23:04.275 }' 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.275 [2024-11-27 04:44:51.663613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:04.275 [2024-11-27 04:44:51.727647] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:04.275 [2024-11-27 04:44:51.727923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.275 [2024-11-27 04:44:51.727953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:04.275 [2024-11-27 04:44:51.727971] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.275 "name": "raid_bdev1", 00:23:04.275 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:04.275 "strip_size_kb": 0, 00:23:04.275 "state": "online", 00:23:04.275 "raid_level": "raid1", 00:23:04.275 "superblock": true, 00:23:04.275 "num_base_bdevs": 2, 00:23:04.275 "num_base_bdevs_discovered": 1, 00:23:04.275 "num_base_bdevs_operational": 1, 00:23:04.275 "base_bdevs_list": [ 00:23:04.275 { 00:23:04.275 "name": null, 00:23:04.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.275 "is_configured": false, 00:23:04.275 "data_offset": 0, 00:23:04.275 "data_size": 7936 00:23:04.275 }, 00:23:04.275 { 00:23:04.275 "name": "BaseBdev2", 00:23:04.275 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:04.275 "is_configured": true, 00:23:04.275 "data_offset": 256, 00:23:04.275 "data_size": 7936 00:23:04.275 } 00:23:04.275 ] 00:23:04.275 }' 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.275 04:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.843 04:44:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:04.843 "name": "raid_bdev1", 00:23:04.843 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:04.843 "strip_size_kb": 0, 00:23:04.843 "state": "online", 00:23:04.843 "raid_level": "raid1", 00:23:04.843 "superblock": true, 00:23:04.843 "num_base_bdevs": 2, 00:23:04.843 "num_base_bdevs_discovered": 1, 00:23:04.843 "num_base_bdevs_operational": 1, 00:23:04.843 "base_bdevs_list": [ 00:23:04.843 { 00:23:04.843 "name": null, 00:23:04.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.843 "is_configured": false, 00:23:04.843 "data_offset": 0, 00:23:04.843 "data_size": 7936 00:23:04.843 }, 00:23:04.843 { 00:23:04.843 "name": "BaseBdev2", 00:23:04.843 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:04.843 "is_configured": true, 00:23:04.843 "data_offset": 
256, 00:23:04.843 "data_size": 7936 00:23:04.843 } 00:23:04.843 ] 00:23:04.843 }' 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.843 [2024-11-27 04:44:52.421432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:04.843 [2024-11-27 04:44:52.439750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.843 04:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:04.843 [2024-11-27 04:44:52.442337] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.218 "name": "raid_bdev1", 00:23:06.218 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:06.218 "strip_size_kb": 0, 00:23:06.218 "state": "online", 00:23:06.218 "raid_level": "raid1", 00:23:06.218 "superblock": true, 00:23:06.218 "num_base_bdevs": 2, 00:23:06.218 "num_base_bdevs_discovered": 2, 00:23:06.218 "num_base_bdevs_operational": 2, 00:23:06.218 "process": { 00:23:06.218 "type": "rebuild", 00:23:06.218 "target": "spare", 00:23:06.218 "progress": { 00:23:06.218 "blocks": 2560, 00:23:06.218 "percent": 32 00:23:06.218 } 00:23:06.218 }, 00:23:06.218 "base_bdevs_list": [ 00:23:06.218 { 00:23:06.218 "name": "spare", 00:23:06.218 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:06.218 "is_configured": true, 00:23:06.218 "data_offset": 256, 00:23:06.218 "data_size": 7936 00:23:06.218 }, 00:23:06.218 { 00:23:06.218 "name": "BaseBdev2", 00:23:06.218 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:06.218 "is_configured": true, 00:23:06.218 "data_offset": 256, 00:23:06.218 "data_size": 7936 00:23:06.218 } 00:23:06.218 ] 00:23:06.218 }' 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:06.218 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=734 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.218 04:44:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.218 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.218 "name": "raid_bdev1", 00:23:06.218 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:06.218 "strip_size_kb": 0, 00:23:06.218 "state": "online", 00:23:06.218 "raid_level": "raid1", 00:23:06.218 "superblock": true, 00:23:06.218 "num_base_bdevs": 2, 00:23:06.218 "num_base_bdevs_discovered": 2, 00:23:06.218 "num_base_bdevs_operational": 2, 00:23:06.218 "process": { 00:23:06.218 "type": "rebuild", 00:23:06.218 "target": "spare", 00:23:06.218 "progress": { 00:23:06.218 "blocks": 2816, 00:23:06.218 "percent": 35 00:23:06.218 } 00:23:06.218 }, 00:23:06.218 "base_bdevs_list": [ 00:23:06.218 { 00:23:06.218 "name": "spare", 00:23:06.218 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:06.218 "is_configured": true, 00:23:06.218 "data_offset": 256, 00:23:06.218 "data_size": 7936 00:23:06.219 }, 00:23:06.219 { 00:23:06.219 "name": "BaseBdev2", 00:23:06.219 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:06.219 "is_configured": true, 00:23:06.219 "data_offset": 256, 00:23:06.219 "data_size": 7936 00:23:06.219 } 00:23:06.219 ] 00:23:06.219 }' 00:23:06.219 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.219 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.219 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.219 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.219 04:44:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.224 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:07.224 "name": "raid_bdev1", 00:23:07.224 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:07.224 "strip_size_kb": 0, 00:23:07.224 "state": "online", 00:23:07.224 "raid_level": "raid1", 00:23:07.224 "superblock": true, 00:23:07.224 "num_base_bdevs": 2, 00:23:07.224 "num_base_bdevs_discovered": 2, 00:23:07.224 "num_base_bdevs_operational": 2, 00:23:07.224 "process": { 00:23:07.224 "type": "rebuild", 00:23:07.224 "target": "spare", 00:23:07.224 "progress": { 00:23:07.224 "blocks": 5888, 00:23:07.224 "percent": 74 00:23:07.224 } 00:23:07.224 }, 00:23:07.224 "base_bdevs_list": [ 00:23:07.224 { 
00:23:07.224 "name": "spare", 00:23:07.224 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:07.224 "is_configured": true, 00:23:07.224 "data_offset": 256, 00:23:07.224 "data_size": 7936 00:23:07.225 }, 00:23:07.225 { 00:23:07.225 "name": "BaseBdev2", 00:23:07.225 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:07.225 "is_configured": true, 00:23:07.225 "data_offset": 256, 00:23:07.225 "data_size": 7936 00:23:07.225 } 00:23:07.225 ] 00:23:07.225 }' 00:23:07.225 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:07.483 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:07.483 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:07.483 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:07.483 04:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:08.050 [2024-11-27 04:44:55.565435] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:08.050 [2024-11-27 04:44:55.565746] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:08.050 [2024-11-27 04:44:55.565935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:08.618 "name": "raid_bdev1", 00:23:08.618 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:08.618 "strip_size_kb": 0, 00:23:08.618 "state": "online", 00:23:08.618 "raid_level": "raid1", 00:23:08.618 "superblock": true, 00:23:08.618 "num_base_bdevs": 2, 00:23:08.618 "num_base_bdevs_discovered": 2, 00:23:08.618 "num_base_bdevs_operational": 2, 00:23:08.618 "base_bdevs_list": [ 00:23:08.618 { 00:23:08.618 "name": "spare", 00:23:08.618 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:08.618 "is_configured": true, 00:23:08.618 "data_offset": 256, 00:23:08.618 "data_size": 7936 00:23:08.618 }, 00:23:08.618 { 00:23:08.618 "name": "BaseBdev2", 00:23:08.618 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:08.618 "is_configured": true, 00:23:08.618 "data_offset": 256, 00:23:08.618 "data_size": 7936 00:23:08.618 } 00:23:08.618 ] 00:23:08.618 }' 00:23:08.618 04:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.618 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:08.619 "name": "raid_bdev1", 00:23:08.619 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:08.619 "strip_size_kb": 0, 00:23:08.619 "state": "online", 00:23:08.619 "raid_level": "raid1", 00:23:08.619 "superblock": true, 00:23:08.619 "num_base_bdevs": 2, 00:23:08.619 "num_base_bdevs_discovered": 2, 00:23:08.619 "num_base_bdevs_operational": 2, 00:23:08.619 "base_bdevs_list": [ 00:23:08.619 { 00:23:08.619 "name": "spare", 00:23:08.619 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:08.619 "is_configured": true, 00:23:08.619 
"data_offset": 256, 00:23:08.619 "data_size": 7936 00:23:08.619 }, 00:23:08.619 { 00:23:08.619 "name": "BaseBdev2", 00:23:08.619 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:08.619 "is_configured": true, 00:23:08.619 "data_offset": 256, 00:23:08.619 "data_size": 7936 00:23:08.619 } 00:23:08.619 ] 00:23:08.619 }' 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:23:08.619 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.877 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:08.877 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.877 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.878 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.878 "name": "raid_bdev1", 00:23:08.878 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:08.878 "strip_size_kb": 0, 00:23:08.878 "state": "online", 00:23:08.878 "raid_level": "raid1", 00:23:08.878 "superblock": true, 00:23:08.878 "num_base_bdevs": 2, 00:23:08.878 "num_base_bdevs_discovered": 2, 00:23:08.878 "num_base_bdevs_operational": 2, 00:23:08.878 "base_bdevs_list": [ 00:23:08.878 { 00:23:08.878 "name": "spare", 00:23:08.878 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:08.878 "is_configured": true, 00:23:08.878 "data_offset": 256, 00:23:08.878 "data_size": 7936 00:23:08.878 }, 00:23:08.878 { 00:23:08.878 "name": "BaseBdev2", 00:23:08.878 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:08.878 "is_configured": true, 00:23:08.878 "data_offset": 256, 00:23:08.878 "data_size": 7936 00:23:08.878 } 00:23:08.878 ] 00:23:08.878 }' 00:23:08.878 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.878 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:09.137 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:09.137 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.137 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:09.395 
[2024-11-27 04:44:56.758508] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:09.395 [2024-11-27 04:44:56.758548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:09.395 [2024-11-27 04:44:56.758649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:09.395 [2024-11-27 04:44:56.758743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:09.395 [2024-11-27 04:44:56.758763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:09.395 04:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:09.654 /dev/nbd0 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.654 1+0 records in 00:23:09.654 1+0 records out 00:23:09.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311441 s, 13.2 MB/s 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:09.654 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:09.912 /dev/nbd1 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.912 1+0 records in 00:23:09.912 1+0 records out 00:23:09.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376522 s, 10.9 MB/s 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:09.912 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:10.171 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:10.171 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:10.171 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:10.171 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:10.171 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:23:10.171 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:10.171 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:10.487 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:10.487 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:10.487 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:10.487 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:10.487 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:10.487 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:10.487 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:23:10.487 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:23:10.487 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:10.487 04:44:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:10.746 04:44:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:10.746 [2024-11-27 04:44:58.214393] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:10.746 [2024-11-27 04:44:58.214610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.746 [2024-11-27 04:44:58.214658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:10.746 [2024-11-27 04:44:58.214674] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.746 [2024-11-27 04:44:58.217566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.746 
[2024-11-27 04:44:58.217613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:10.746 [2024-11-27 04:44:58.217745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:10.746 [2024-11-27 04:44:58.217829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:10.746 [2024-11-27 04:44:58.218036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:10.746 spare 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:10.746 [2024-11-27 04:44:58.318171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:10.746 [2024-11-27 04:44:58.318233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:10.746 [2024-11-27 04:44:58.318642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:23:10.746 [2024-11-27 04:44:58.318947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:10.746 [2024-11-27 04:44:58.318969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:10.746 [2024-11-27 04:44:58.319223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:10.746 04:44:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:10.746 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:10.747 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:10.747 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.747 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.747 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.747 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.747 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.747 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.747 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:10.747 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.747 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.005 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.005 "name": "raid_bdev1", 00:23:11.005 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:11.005 "strip_size_kb": 0, 00:23:11.005 "state": "online", 00:23:11.005 "raid_level": "raid1", 00:23:11.005 "superblock": true, 00:23:11.005 "num_base_bdevs": 2, 00:23:11.005 "num_base_bdevs_discovered": 2, 00:23:11.005 "num_base_bdevs_operational": 2, 
00:23:11.005 "base_bdevs_list": [ 00:23:11.005 { 00:23:11.005 "name": "spare", 00:23:11.005 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:11.005 "is_configured": true, 00:23:11.005 "data_offset": 256, 00:23:11.005 "data_size": 7936 00:23:11.005 }, 00:23:11.005 { 00:23:11.005 "name": "BaseBdev2", 00:23:11.005 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:11.005 "is_configured": true, 00:23:11.005 "data_offset": 256, 00:23:11.005 "data_size": 7936 00:23:11.005 } 00:23:11.005 ] 00:23:11.005 }' 00:23:11.005 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.005 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:11.263 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:11.263 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:11.263 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:11.263 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:11.263 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:11.263 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.263 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.263 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:11.263 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.263 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.524 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:11.524 "name": "raid_bdev1", 00:23:11.524 
"uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:11.524 "strip_size_kb": 0, 00:23:11.524 "state": "online", 00:23:11.524 "raid_level": "raid1", 00:23:11.524 "superblock": true, 00:23:11.524 "num_base_bdevs": 2, 00:23:11.524 "num_base_bdevs_discovered": 2, 00:23:11.524 "num_base_bdevs_operational": 2, 00:23:11.524 "base_bdevs_list": [ 00:23:11.524 { 00:23:11.524 "name": "spare", 00:23:11.524 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:11.524 "is_configured": true, 00:23:11.524 "data_offset": 256, 00:23:11.524 "data_size": 7936 00:23:11.524 }, 00:23:11.524 { 00:23:11.524 "name": "BaseBdev2", 00:23:11.524 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:11.524 "is_configured": true, 00:23:11.524 "data_offset": 256, 00:23:11.524 "data_size": 7936 00:23:11.524 } 00:23:11.524 ] 00:23:11.524 }' 00:23:11.524 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:11.524 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:11.524 04:44:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:11.524 [2024-11-27 04:44:59.063384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.524 
04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.524 "name": "raid_bdev1", 00:23:11.524 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:11.524 "strip_size_kb": 0, 00:23:11.524 "state": "online", 00:23:11.524 "raid_level": "raid1", 00:23:11.524 "superblock": true, 00:23:11.524 "num_base_bdevs": 2, 00:23:11.524 "num_base_bdevs_discovered": 1, 00:23:11.524 "num_base_bdevs_operational": 1, 00:23:11.524 "base_bdevs_list": [ 00:23:11.524 { 00:23:11.524 "name": null, 00:23:11.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.524 "is_configured": false, 00:23:11.524 "data_offset": 0, 00:23:11.524 "data_size": 7936 00:23:11.524 }, 00:23:11.524 { 00:23:11.524 "name": "BaseBdev2", 00:23:11.524 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:11.524 "is_configured": true, 00:23:11.524 "data_offset": 256, 00:23:11.524 "data_size": 7936 00:23:11.524 } 00:23:11.524 ] 00:23:11.524 }' 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.524 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:12.092 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:12.092 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.092 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:12.092 [2024-11-27 04:44:59.555559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:12.092 [2024-11-27 04:44:59.556003] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:23:12.092 [2024-11-27 04:44:59.556043] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:12.092 [2024-11-27 04:44:59.556094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:12.092 [2024-11-27 04:44:59.572382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:23:12.092 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.092 04:44:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:12.092 [2024-11-27 04:44:59.574994] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:13.027 
"name": "raid_bdev1", 00:23:13.027 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:13.027 "strip_size_kb": 0, 00:23:13.027 "state": "online", 00:23:13.027 "raid_level": "raid1", 00:23:13.027 "superblock": true, 00:23:13.027 "num_base_bdevs": 2, 00:23:13.027 "num_base_bdevs_discovered": 2, 00:23:13.027 "num_base_bdevs_operational": 2, 00:23:13.027 "process": { 00:23:13.027 "type": "rebuild", 00:23:13.027 "target": "spare", 00:23:13.027 "progress": { 00:23:13.027 "blocks": 2560, 00:23:13.027 "percent": 32 00:23:13.027 } 00:23:13.027 }, 00:23:13.027 "base_bdevs_list": [ 00:23:13.027 { 00:23:13.027 "name": "spare", 00:23:13.027 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:13.027 "is_configured": true, 00:23:13.027 "data_offset": 256, 00:23:13.027 "data_size": 7936 00:23:13.027 }, 00:23:13.027 { 00:23:13.027 "name": "BaseBdev2", 00:23:13.027 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:13.027 "is_configured": true, 00:23:13.027 "data_offset": 256, 00:23:13.027 "data_size": 7936 00:23:13.027 } 00:23:13.027 ] 00:23:13.027 }' 00:23:13.027 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:13.285 [2024-11-27 04:45:00.736199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:13.285 [2024-11-27 
04:45:00.784098] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:13.285 [2024-11-27 04:45:00.784187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.285 [2024-11-27 04:45:00.784212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:13.285 [2024-11-27 04:45:00.784227] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.285 04:45:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.285 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:13.285 "name": "raid_bdev1", 00:23:13.285 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:13.285 "strip_size_kb": 0, 00:23:13.285 "state": "online", 00:23:13.285 "raid_level": "raid1", 00:23:13.285 "superblock": true, 00:23:13.285 "num_base_bdevs": 2, 00:23:13.285 "num_base_bdevs_discovered": 1, 00:23:13.285 "num_base_bdevs_operational": 1, 00:23:13.285 "base_bdevs_list": [ 00:23:13.285 { 00:23:13.285 "name": null, 00:23:13.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.285 "is_configured": false, 00:23:13.285 "data_offset": 0, 00:23:13.285 "data_size": 7936 00:23:13.285 }, 00:23:13.285 { 00:23:13.285 "name": "BaseBdev2", 00:23:13.285 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:13.285 "is_configured": true, 00:23:13.285 "data_offset": 256, 00:23:13.286 "data_size": 7936 00:23:13.286 } 00:23:13.286 ] 00:23:13.286 }' 00:23:13.286 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:13.286 04:45:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:13.852 04:45:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:13.852 04:45:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.852 04:45:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:13.852 [2024-11-27 04:45:01.324034] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:13.852 [2024-11-27 04:45:01.324118] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.852 [2024-11-27 04:45:01.324150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:13.852 [2024-11-27 04:45:01.324168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.852 [2024-11-27 04:45:01.324797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.852 [2024-11-27 04:45:01.324835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:13.852 [2024-11-27 04:45:01.324971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:13.852 [2024-11-27 04:45:01.324997] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:13.852 [2024-11-27 04:45:01.325011] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:13.852 [2024-11-27 04:45:01.325048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:13.852 spare 00:23:13.852 [2024-11-27 04:45:01.340910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:23:13.852 04:45:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.852 04:45:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:13.852 [2024-11-27 04:45:01.343480] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:14.787 "name": "raid_bdev1", 00:23:14.787 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:14.787 "strip_size_kb": 0, 00:23:14.787 
"state": "online", 00:23:14.787 "raid_level": "raid1", 00:23:14.787 "superblock": true, 00:23:14.787 "num_base_bdevs": 2, 00:23:14.787 "num_base_bdevs_discovered": 2, 00:23:14.787 "num_base_bdevs_operational": 2, 00:23:14.787 "process": { 00:23:14.787 "type": "rebuild", 00:23:14.787 "target": "spare", 00:23:14.787 "progress": { 00:23:14.787 "blocks": 2560, 00:23:14.787 "percent": 32 00:23:14.787 } 00:23:14.787 }, 00:23:14.787 "base_bdevs_list": [ 00:23:14.787 { 00:23:14.787 "name": "spare", 00:23:14.787 "uuid": "ffa480a3-a0a7-50ae-82b1-9c2bf14b9acd", 00:23:14.787 "is_configured": true, 00:23:14.787 "data_offset": 256, 00:23:14.787 "data_size": 7936 00:23:14.787 }, 00:23:14.787 { 00:23:14.787 "name": "BaseBdev2", 00:23:14.787 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:14.787 "is_configured": true, 00:23:14.787 "data_offset": 256, 00:23:14.787 "data_size": 7936 00:23:14.787 } 00:23:14.787 ] 00:23:14.787 }' 00:23:14.787 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:15.047 [2024-11-27 04:45:02.513009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:15.047 [2024-11-27 04:45:02.552932] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:23:15.047 [2024-11-27 04:45:02.553230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.047 [2024-11-27 04:45:02.553265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:15.047 [2024-11-27 04:45:02.553279] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.047 "name": "raid_bdev1", 00:23:15.047 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:15.047 "strip_size_kb": 0, 00:23:15.047 "state": "online", 00:23:15.047 "raid_level": "raid1", 00:23:15.047 "superblock": true, 00:23:15.047 "num_base_bdevs": 2, 00:23:15.047 "num_base_bdevs_discovered": 1, 00:23:15.047 "num_base_bdevs_operational": 1, 00:23:15.047 "base_bdevs_list": [ 00:23:15.047 { 00:23:15.047 "name": null, 00:23:15.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.047 "is_configured": false, 00:23:15.047 "data_offset": 0, 00:23:15.047 "data_size": 7936 00:23:15.047 }, 00:23:15.047 { 00:23:15.047 "name": "BaseBdev2", 00:23:15.047 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:15.047 "is_configured": true, 00:23:15.047 "data_offset": 256, 00:23:15.047 "data_size": 7936 00:23:15.047 } 00:23:15.047 ] 00:23:15.047 }' 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.047 04:45:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:15.614 "name": "raid_bdev1", 00:23:15.614 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:15.614 "strip_size_kb": 0, 00:23:15.614 "state": "online", 00:23:15.614 "raid_level": "raid1", 00:23:15.614 "superblock": true, 00:23:15.614 "num_base_bdevs": 2, 00:23:15.614 "num_base_bdevs_discovered": 1, 00:23:15.614 "num_base_bdevs_operational": 1, 00:23:15.614 "base_bdevs_list": [ 00:23:15.614 { 00:23:15.614 "name": null, 00:23:15.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.614 "is_configured": false, 00:23:15.614 "data_offset": 0, 00:23:15.614 "data_size": 7936 00:23:15.614 }, 00:23:15.614 { 00:23:15.614 "name": "BaseBdev2", 00:23:15.614 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:15.614 "is_configured": true, 00:23:15.614 "data_offset": 256, 00:23:15.614 "data_size": 7936 00:23:15.614 } 00:23:15.614 ] 00:23:15.614 }' 00:23:15.614 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:15.872 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:15.872 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:15.872 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:15.872 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:23:15.872 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.872 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.873 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:15.873 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.873 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 [2024-11-27 04:45:03.349345] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:15.873 [2024-11-27 04:45:03.349556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.873 [2024-11-27 04:45:03.349640] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:15.873 [2024-11-27 04:45:03.349879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.873 [2024-11-27 04:45:03.350516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.873 [2024-11-27 04:45:03.350543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:15.873 [2024-11-27 04:45:03.350673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:15.873 [2024-11-27 04:45:03.350695] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:15.873 [2024-11-27 04:45:03.350712] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:15.873 [2024-11-27 04:45:03.350725] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:23:15.873 BaseBdev1 00:23:15.873 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.873 04:45:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.807 04:45:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.807 "name": "raid_bdev1", 00:23:16.807 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:16.807 "strip_size_kb": 0, 00:23:16.807 "state": "online", 00:23:16.807 "raid_level": "raid1", 00:23:16.807 "superblock": true, 00:23:16.807 "num_base_bdevs": 2, 00:23:16.807 "num_base_bdevs_discovered": 1, 00:23:16.807 "num_base_bdevs_operational": 1, 00:23:16.807 "base_bdevs_list": [ 00:23:16.807 { 00:23:16.807 "name": null, 00:23:16.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.807 "is_configured": false, 00:23:16.807 "data_offset": 0, 00:23:16.807 "data_size": 7936 00:23:16.807 }, 00:23:16.807 { 00:23:16.807 "name": "BaseBdev2", 00:23:16.807 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:16.807 "is_configured": true, 00:23:16.807 "data_offset": 256, 00:23:16.807 "data_size": 7936 00:23:16.807 } 00:23:16.807 ] 00:23:16.807 }' 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.807 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:17.375 "name": "raid_bdev1", 00:23:17.375 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:17.375 "strip_size_kb": 0, 00:23:17.375 "state": "online", 00:23:17.375 "raid_level": "raid1", 00:23:17.375 "superblock": true, 00:23:17.375 "num_base_bdevs": 2, 00:23:17.375 "num_base_bdevs_discovered": 1, 00:23:17.375 "num_base_bdevs_operational": 1, 00:23:17.375 "base_bdevs_list": [ 00:23:17.375 { 00:23:17.375 "name": null, 00:23:17.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.375 "is_configured": false, 00:23:17.375 "data_offset": 0, 00:23:17.375 "data_size": 7936 00:23:17.375 }, 00:23:17.375 { 00:23:17.375 "name": "BaseBdev2", 00:23:17.375 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:17.375 "is_configured": true, 00:23:17.375 "data_offset": 256, 00:23:17.375 "data_size": 7936 00:23:17.375 } 00:23:17.375 ] 00:23:17.375 }' 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:23:17.375 04:45:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.375 04:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.633 [2024-11-27 04:45:04.997879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:17.633 request: 00:23:17.633 { 00:23:17.633 "base_bdev": "BaseBdev1", 00:23:17.633 "raid_bdev": "raid_bdev1", 00:23:17.633 "method": "bdev_raid_add_base_bdev", 00:23:17.633 "req_id": 1 00:23:17.633 } 00:23:17.633 Got JSON-RPC error response 00:23:17.633 response: 00:23:17.633 { 00:23:17.633 [2024-11-27 04:45:04.999118] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:17.633 [2024-11-27 04:45:04.999150] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:17.633 "code": -22, 00:23:17.633 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:17.633 } 00:23:17.633 04:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:17.633 04:45:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@655 -- # es=1 00:23:17.633 04:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.633 04:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.633 04:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.633 04:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.593 04:45:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.593 "name": "raid_bdev1", 00:23:18.593 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:18.593 "strip_size_kb": 0, 00:23:18.593 "state": "online", 00:23:18.593 "raid_level": "raid1", 00:23:18.593 "superblock": true, 00:23:18.593 "num_base_bdevs": 2, 00:23:18.593 "num_base_bdevs_discovered": 1, 00:23:18.593 "num_base_bdevs_operational": 1, 00:23:18.593 "base_bdevs_list": [ 00:23:18.593 { 00:23:18.593 "name": null, 00:23:18.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.593 "is_configured": false, 00:23:18.593 "data_offset": 0, 00:23:18.593 "data_size": 7936 00:23:18.593 }, 00:23:18.593 { 00:23:18.593 "name": "BaseBdev2", 00:23:18.593 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:18.593 "is_configured": true, 00:23:18.593 "data_offset": 256, 00:23:18.593 "data_size": 7936 00:23:18.593 } 00:23:18.593 ] 00:23:18.593 }' 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.593 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:19.160 "name": "raid_bdev1", 00:23:19.160 "uuid": "387ea112-53c5-4cbb-829b-a141dc4a9e14", 00:23:19.160 "strip_size_kb": 0, 00:23:19.160 "state": "online", 00:23:19.160 "raid_level": "raid1", 00:23:19.160 "superblock": true, 00:23:19.160 "num_base_bdevs": 2, 00:23:19.160 "num_base_bdevs_discovered": 1, 00:23:19.160 "num_base_bdevs_operational": 1, 00:23:19.160 "base_bdevs_list": [ 00:23:19.160 { 00:23:19.160 "name": null, 00:23:19.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.160 "is_configured": false, 00:23:19.160 "data_offset": 0, 00:23:19.160 "data_size": 7936 00:23:19.160 }, 00:23:19.160 { 00:23:19.160 "name": "BaseBdev2", 00:23:19.160 "uuid": "e1406efd-2a13-50e8-a41e-c521e557b06a", 00:23:19.160 "is_configured": true, 00:23:19.160 "data_offset": 256, 00:23:19.160 "data_size": 7936 00:23:19.160 } 00:23:19.160 ] 00:23:19.160 }' 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@784 -- # killprocess 87074 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87074 ']' 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87074 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87074 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87074' 00:23:19.160 killing process with pid 87074 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87074 00:23:19.160 Received shutdown signal, test time was about 60.000000 seconds 00:23:19.160 00:23:19.160 Latency(us) 00:23:19.160 [2024-11-27T04:45:06.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.160 [2024-11-27T04:45:06.783Z] =================================================================================================================== 00:23:19.160 [2024-11-27T04:45:06.783Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.160 [2024-11-27 04:45:06.727896] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:19.160 04:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87074 00:23:19.160 [2024-11-27 04:45:06.728050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:19.160 [2024-11-27 04:45:06.728143] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:19.160 [2024-11-27 04:45:06.728168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:19.569 [2024-11-27 04:45:06.994944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:20.502 04:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:23:20.502 00:23:20.502 real 0m21.497s 00:23:20.502 user 0m29.117s 00:23:20.502 sys 0m2.423s 00:23:20.502 04:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:20.502 04:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:20.502 ************************************ 00:23:20.502 END TEST raid_rebuild_test_sb_4k 00:23:20.502 ************************************ 00:23:20.761 04:45:08 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:23:20.761 04:45:08 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:23:20.761 04:45:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:20.761 04:45:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.761 04:45:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:20.761 ************************************ 00:23:20.761 START TEST raid_state_function_test_sb_md_separate 00:23:20.761 ************************************ 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:20.761 04:45:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:20.761 04:45:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87785 00:23:20.761 Process raid pid: 87785 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87785' 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87785 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87785 ']' 00:23:20.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.761 04:45:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.761 [2024-11-27 04:45:08.242038] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:23:20.761 [2024-11-27 04:45:08.242340] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.020 [2024-11-27 04:45:08.420852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.020 [2024-11-27 04:45:08.559872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.279 [2024-11-27 04:45:08.774193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:21.279 [2024-11-27 04:45:08.774249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:21.845 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.846 [2024-11-27 04:45:09.221693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:21.846 [2024-11-27 04:45:09.221963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:23:21.846 [2024-11-27 04:45:09.222109] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:21.846 [2024-11-27 04:45:09.222143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.846 04:45:09 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.846 "name": "Existed_Raid", 00:23:21.846 "uuid": "bdfaef98-08fa-43f6-ab21-452b5627e937", 00:23:21.846 "strip_size_kb": 0, 00:23:21.846 "state": "configuring", 00:23:21.846 "raid_level": "raid1", 00:23:21.846 "superblock": true, 00:23:21.846 "num_base_bdevs": 2, 00:23:21.846 "num_base_bdevs_discovered": 0, 00:23:21.846 "num_base_bdevs_operational": 2, 00:23:21.846 "base_bdevs_list": [ 00:23:21.846 { 00:23:21.846 "name": "BaseBdev1", 00:23:21.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.846 "is_configured": false, 00:23:21.846 "data_offset": 0, 00:23:21.846 "data_size": 0 00:23:21.846 }, 00:23:21.846 { 00:23:21.846 "name": "BaseBdev2", 00:23:21.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.846 "is_configured": false, 00:23:21.846 "data_offset": 0, 00:23:21.846 "data_size": 0 00:23:21.846 } 00:23:21.846 ] 00:23:21.846 }' 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.846 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.412 [2024-11-27 
04:45:09.737812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:22.412 [2024-11-27 04:45:09.738005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.412 [2024-11-27 04:45:09.745763] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:22.412 [2024-11-27 04:45:09.745956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:22.412 [2024-11-27 04:45:09.745982] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:22.412 [2024-11-27 04:45:09.746003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.412 [2024-11-27 04:45:09.793370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.412 BaseBdev1 
00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:22.412 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 [ 00:23:22.413 { 00:23:22.413 "name": "BaseBdev1", 00:23:22.413 "aliases": [ 00:23:22.413 "f2aa4c42-f302-4f97-a5ca-2f12acdfa879" 00:23:22.413 ], 00:23:22.413 "product_name": "Malloc disk", 00:23:22.413 
"block_size": 4096, 00:23:22.413 "num_blocks": 8192, 00:23:22.413 "uuid": "f2aa4c42-f302-4f97-a5ca-2f12acdfa879", 00:23:22.413 "md_size": 32, 00:23:22.413 "md_interleave": false, 00:23:22.413 "dif_type": 0, 00:23:22.413 "assigned_rate_limits": { 00:23:22.413 "rw_ios_per_sec": 0, 00:23:22.413 "rw_mbytes_per_sec": 0, 00:23:22.413 "r_mbytes_per_sec": 0, 00:23:22.413 "w_mbytes_per_sec": 0 00:23:22.413 }, 00:23:22.413 "claimed": true, 00:23:22.413 "claim_type": "exclusive_write", 00:23:22.413 "zoned": false, 00:23:22.413 "supported_io_types": { 00:23:22.413 "read": true, 00:23:22.413 "write": true, 00:23:22.413 "unmap": true, 00:23:22.413 "flush": true, 00:23:22.413 "reset": true, 00:23:22.413 "nvme_admin": false, 00:23:22.413 "nvme_io": false, 00:23:22.413 "nvme_io_md": false, 00:23:22.413 "write_zeroes": true, 00:23:22.413 "zcopy": true, 00:23:22.413 "get_zone_info": false, 00:23:22.413 "zone_management": false, 00:23:22.413 "zone_append": false, 00:23:22.413 "compare": false, 00:23:22.413 "compare_and_write": false, 00:23:22.413 "abort": true, 00:23:22.413 "seek_hole": false, 00:23:22.413 "seek_data": false, 00:23:22.413 "copy": true, 00:23:22.413 "nvme_iov_md": false 00:23:22.413 }, 00:23:22.413 "memory_domains": [ 00:23:22.413 { 00:23:22.413 "dma_device_id": "system", 00:23:22.413 "dma_device_type": 1 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.413 "dma_device_type": 2 00:23:22.413 } 00:23:22.413 ], 00:23:22.413 "driver_specific": {} 00:23:22.413 } 00:23:22.413 ] 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:22.413 04:45:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.413 "name": "Existed_Raid", 00:23:22.413 "uuid": "327233b8-20c8-4491-ab5e-631b6c91020c", 
00:23:22.413 "strip_size_kb": 0, 00:23:22.413 "state": "configuring", 00:23:22.413 "raid_level": "raid1", 00:23:22.413 "superblock": true, 00:23:22.413 "num_base_bdevs": 2, 00:23:22.413 "num_base_bdevs_discovered": 1, 00:23:22.413 "num_base_bdevs_operational": 2, 00:23:22.413 "base_bdevs_list": [ 00:23:22.413 { 00:23:22.413 "name": "BaseBdev1", 00:23:22.413 "uuid": "f2aa4c42-f302-4f97-a5ca-2f12acdfa879", 00:23:22.413 "is_configured": true, 00:23:22.413 "data_offset": 256, 00:23:22.413 "data_size": 7936 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "name": "BaseBdev2", 00:23:22.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.413 "is_configured": false, 00:23:22.413 "data_offset": 0, 00:23:22.413 "data_size": 0 00:23:22.413 } 00:23:22.413 ] 00:23:22.413 }' 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.413 04:45:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.980 [2024-11-27 04:45:10.357562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:22.980 [2024-11-27 04:45:10.357624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:22.980 04:45:10 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.980 [2024-11-27 04:45:10.369567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.980 [2024-11-27 04:45:10.372318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:22.980 [2024-11-27 04:45:10.372482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.980 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.980 "name": "Existed_Raid", 00:23:22.980 "uuid": "bd554ba2-7a70-4bf2-9f59-22f76679f0cf", 00:23:22.980 "strip_size_kb": 0, 00:23:22.980 "state": "configuring", 00:23:22.980 "raid_level": "raid1", 00:23:22.980 "superblock": true, 00:23:22.980 "num_base_bdevs": 2, 00:23:22.980 "num_base_bdevs_discovered": 1, 00:23:22.980 "num_base_bdevs_operational": 2, 00:23:22.980 "base_bdevs_list": [ 00:23:22.981 { 00:23:22.981 "name": "BaseBdev1", 00:23:22.981 "uuid": "f2aa4c42-f302-4f97-a5ca-2f12acdfa879", 00:23:22.981 "is_configured": true, 00:23:22.981 "data_offset": 256, 00:23:22.981 "data_size": 7936 00:23:22.981 }, 00:23:22.981 { 00:23:22.981 "name": "BaseBdev2", 00:23:22.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.981 "is_configured": false, 00:23:22.981 "data_offset": 0, 00:23:22.981 "data_size": 0 00:23:22.981 } 00:23:22.981 ] 00:23:22.981 }' 00:23:22.981 04:45:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.981 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.549 [2024-11-27 04:45:10.966371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:23.549 [2024-11-27 04:45:10.966726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:23.549 [2024-11-27 04:45:10.966750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:23.549 [2024-11-27 04:45:10.966909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:23.549 [2024-11-27 04:45:10.967086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:23.549 [2024-11-27 04:45:10.967106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:23.549 [2024-11-27 04:45:10.967220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.549 BaseBdev2 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.549 04:45:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.549 [ 00:23:23.549 { 00:23:23.549 "name": "BaseBdev2", 00:23:23.549 "aliases": [ 00:23:23.549 "ab495fd8-c960-4491-937f-1c401bdab25e" 00:23:23.549 ], 00:23:23.549 "product_name": "Malloc disk", 00:23:23.549 "block_size": 4096, 00:23:23.549 "num_blocks": 8192, 00:23:23.549 "uuid": "ab495fd8-c960-4491-937f-1c401bdab25e", 00:23:23.549 "md_size": 32, 00:23:23.549 "md_interleave": false, 00:23:23.549 "dif_type": 0, 00:23:23.549 "assigned_rate_limits": { 00:23:23.549 "rw_ios_per_sec": 0, 00:23:23.549 "rw_mbytes_per_sec": 0, 00:23:23.549 "r_mbytes_per_sec": 0, 00:23:23.549 "w_mbytes_per_sec": 0 00:23:23.549 }, 00:23:23.549 "claimed": true, 00:23:23.549 "claim_type": 
"exclusive_write", 00:23:23.549 "zoned": false, 00:23:23.549 "supported_io_types": { 00:23:23.549 "read": true, 00:23:23.549 "write": true, 00:23:23.549 "unmap": true, 00:23:23.549 "flush": true, 00:23:23.549 "reset": true, 00:23:23.549 "nvme_admin": false, 00:23:23.549 "nvme_io": false, 00:23:23.549 "nvme_io_md": false, 00:23:23.549 "write_zeroes": true, 00:23:23.549 "zcopy": true, 00:23:23.549 "get_zone_info": false, 00:23:23.549 "zone_management": false, 00:23:23.549 "zone_append": false, 00:23:23.549 "compare": false, 00:23:23.549 "compare_and_write": false, 00:23:23.549 "abort": true, 00:23:23.549 "seek_hole": false, 00:23:23.549 "seek_data": false, 00:23:23.549 "copy": true, 00:23:23.549 "nvme_iov_md": false 00:23:23.549 }, 00:23:23.549 "memory_domains": [ 00:23:23.549 { 00:23:23.549 "dma_device_id": "system", 00:23:23.549 "dma_device_type": 1 00:23:23.549 }, 00:23:23.549 { 00:23:23.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.549 "dma_device_type": 2 00:23:23.549 } 00:23:23.549 ], 00:23:23.549 "driver_specific": {} 00:23:23.549 } 00:23:23.549 ] 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:23.549 
04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:23.549 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:23.550 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:23.550 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:23.550 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.550 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.550 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.550 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.550 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.550 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.550 "name": "Existed_Raid", 00:23:23.550 "uuid": "bd554ba2-7a70-4bf2-9f59-22f76679f0cf", 00:23:23.550 "strip_size_kb": 0, 00:23:23.550 "state": "online", 00:23:23.550 "raid_level": "raid1", 00:23:23.550 "superblock": true, 00:23:23.550 "num_base_bdevs": 2, 00:23:23.550 "num_base_bdevs_discovered": 2, 00:23:23.550 "num_base_bdevs_operational": 2, 00:23:23.550 
"base_bdevs_list": [ 00:23:23.550 { 00:23:23.550 "name": "BaseBdev1", 00:23:23.550 "uuid": "f2aa4c42-f302-4f97-a5ca-2f12acdfa879", 00:23:23.550 "is_configured": true, 00:23:23.550 "data_offset": 256, 00:23:23.550 "data_size": 7936 00:23:23.550 }, 00:23:23.550 { 00:23:23.550 "name": "BaseBdev2", 00:23:23.550 "uuid": "ab495fd8-c960-4491-937f-1c401bdab25e", 00:23:23.550 "is_configured": true, 00:23:23.550 "data_offset": 256, 00:23:23.550 "data_size": 7936 00:23:23.550 } 00:23:23.550 ] 00:23:23.550 }' 00:23:23.550 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.550 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:23:24.118 [2024-11-27 04:45:11.555019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:24.118 "name": "Existed_Raid", 00:23:24.118 "aliases": [ 00:23:24.118 "bd554ba2-7a70-4bf2-9f59-22f76679f0cf" 00:23:24.118 ], 00:23:24.118 "product_name": "Raid Volume", 00:23:24.118 "block_size": 4096, 00:23:24.118 "num_blocks": 7936, 00:23:24.118 "uuid": "bd554ba2-7a70-4bf2-9f59-22f76679f0cf", 00:23:24.118 "md_size": 32, 00:23:24.118 "md_interleave": false, 00:23:24.118 "dif_type": 0, 00:23:24.118 "assigned_rate_limits": { 00:23:24.118 "rw_ios_per_sec": 0, 00:23:24.118 "rw_mbytes_per_sec": 0, 00:23:24.118 "r_mbytes_per_sec": 0, 00:23:24.118 "w_mbytes_per_sec": 0 00:23:24.118 }, 00:23:24.118 "claimed": false, 00:23:24.118 "zoned": false, 00:23:24.118 "supported_io_types": { 00:23:24.118 "read": true, 00:23:24.118 "write": true, 00:23:24.118 "unmap": false, 00:23:24.118 "flush": false, 00:23:24.118 "reset": true, 00:23:24.118 "nvme_admin": false, 00:23:24.118 "nvme_io": false, 00:23:24.118 "nvme_io_md": false, 00:23:24.118 "write_zeroes": true, 00:23:24.118 "zcopy": false, 00:23:24.118 "get_zone_info": false, 00:23:24.118 "zone_management": false, 00:23:24.118 "zone_append": false, 00:23:24.118 "compare": false, 00:23:24.118 "compare_and_write": false, 00:23:24.118 "abort": false, 00:23:24.118 "seek_hole": false, 00:23:24.118 "seek_data": false, 00:23:24.118 "copy": false, 00:23:24.118 "nvme_iov_md": false 00:23:24.118 }, 00:23:24.118 "memory_domains": [ 00:23:24.118 { 00:23:24.118 "dma_device_id": "system", 00:23:24.118 "dma_device_type": 1 00:23:24.118 }, 00:23:24.118 { 00:23:24.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.118 "dma_device_type": 2 00:23:24.118 }, 00:23:24.118 { 
00:23:24.118 "dma_device_id": "system", 00:23:24.118 "dma_device_type": 1 00:23:24.118 }, 00:23:24.118 { 00:23:24.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.118 "dma_device_type": 2 00:23:24.118 } 00:23:24.118 ], 00:23:24.118 "driver_specific": { 00:23:24.118 "raid": { 00:23:24.118 "uuid": "bd554ba2-7a70-4bf2-9f59-22f76679f0cf", 00:23:24.118 "strip_size_kb": 0, 00:23:24.118 "state": "online", 00:23:24.118 "raid_level": "raid1", 00:23:24.118 "superblock": true, 00:23:24.118 "num_base_bdevs": 2, 00:23:24.118 "num_base_bdevs_discovered": 2, 00:23:24.118 "num_base_bdevs_operational": 2, 00:23:24.118 "base_bdevs_list": [ 00:23:24.118 { 00:23:24.118 "name": "BaseBdev1", 00:23:24.118 "uuid": "f2aa4c42-f302-4f97-a5ca-2f12acdfa879", 00:23:24.118 "is_configured": true, 00:23:24.118 "data_offset": 256, 00:23:24.118 "data_size": 7936 00:23:24.118 }, 00:23:24.118 { 00:23:24.118 "name": "BaseBdev2", 00:23:24.118 "uuid": "ab495fd8-c960-4491-937f-1c401bdab25e", 00:23:24.118 "is_configured": true, 00:23:24.118 "data_offset": 256, 00:23:24.118 "data_size": 7936 00:23:24.118 } 00:23:24.118 ] 00:23:24.118 } 00:23:24.118 } 00:23:24.118 }' 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:24.118 BaseBdev2' 00:23:24.118 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.119 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:24.119 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:24.119 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:24.119 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.119 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.119 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.377 [2024-11-27 04:45:11.830732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:24.377 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:24.378 "name": "Existed_Raid", 00:23:24.378 "uuid": "bd554ba2-7a70-4bf2-9f59-22f76679f0cf", 00:23:24.378 "strip_size_kb": 0, 00:23:24.378 "state": "online", 00:23:24.378 "raid_level": "raid1", 00:23:24.378 "superblock": true, 00:23:24.378 "num_base_bdevs": 2, 00:23:24.378 "num_base_bdevs_discovered": 1, 00:23:24.378 "num_base_bdevs_operational": 1, 00:23:24.378 "base_bdevs_list": [ 00:23:24.378 { 00:23:24.378 "name": null, 00:23:24.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.378 "is_configured": false, 00:23:24.378 "data_offset": 0, 00:23:24.378 "data_size": 7936 00:23:24.378 }, 00:23:24.378 { 00:23:24.378 "name": "BaseBdev2", 00:23:24.378 "uuid": 
"ab495fd8-c960-4491-937f-1c401bdab25e", 00:23:24.378 "is_configured": true, 00:23:24.378 "data_offset": 256, 00:23:24.378 "data_size": 7936 00:23:24.378 } 00:23:24.378 ] 00:23:24.378 }' 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:24.378 04:45:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.996 [2024-11-27 04:45:12.501636] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:24.996 [2024-11-27 04:45:12.501906] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:24.996 [2024-11-27 04:45:12.606979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.996 [2024-11-27 04:45:12.607104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.996 [2024-11-27 04:45:12.607130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.996 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:25.255 04:45:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87785 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87785 ']' 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87785 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87785 00:23:25.255 killing process with pid 87785 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87785' 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87785 00:23:25.255 04:45:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87785 00:23:25.255 [2024-11-27 04:45:12.700265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:25.255 [2024-11-27 04:45:12.716981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:26.635 04:45:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:23:26.635 00:23:26.635 real 0m5.787s 00:23:26.635 user 0m8.631s 00:23:26.635 sys 0m0.819s 00:23:26.635 04:45:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.635 
************************************ 00:23:26.635 END TEST raid_state_function_test_sb_md_separate 00:23:26.635 ************************************ 00:23:26.635 04:45:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:26.635 04:45:13 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:23:26.635 04:45:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:26.635 04:45:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.635 04:45:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:26.635 ************************************ 00:23:26.635 START TEST raid_superblock_test_md_separate 00:23:26.635 ************************************ 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88043 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88043 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88043 ']' 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.635 04:45:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:26.635 [2024-11-27 04:45:14.093813] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:23:26.635 [2024-11-27 04:45:14.094251] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88043 ] 00:23:26.894 [2024-11-27 04:45:14.278435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.894 [2024-11-27 04:45:14.435534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.154 [2024-11-27 04:45:14.674609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.154 [2024-11-27 04:45:14.674733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:27.722 04:45:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.722 malloc1 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.722 [2024-11-27 04:45:15.161819] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:27.722 [2024-11-27 04:45:15.163315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.722 [2024-11-27 04:45:15.163371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:27.722 [2024-11-27 04:45:15.163394] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.722 [2024-11-27 04:45:15.166529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.722 [2024-11-27 04:45:15.166596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:23:27.722 pt1 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:27.722 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.723 malloc2 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.723 04:45:15 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.723 [2024-11-27 04:45:15.227599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:27.723 [2024-11-27 04:45:15.228034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.723 [2024-11-27 04:45:15.228087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:27.723 [2024-11-27 04:45:15.228110] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.723 [2024-11-27 04:45:15.231079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.723 [2024-11-27 04:45:15.231258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:27.723 pt2 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.723 [2024-11-27 04:45:15.235679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:27.723 [2024-11-27 04:45:15.238489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:27.723 [2024-11-27 04:45:15.238940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:27.723 [2024-11-27 04:45:15.238974] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:27.723 [2024-11-27 04:45:15.239083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:27.723 [2024-11-27 04:45:15.239276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:27.723 [2024-11-27 04:45:15.239302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:27.723 [2024-11-27 04:45:15.239446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.723 04:45:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.723 "name": "raid_bdev1", 00:23:27.723 "uuid": "55b342fb-d772-4351-b707-0a6e628b9383", 00:23:27.723 "strip_size_kb": 0, 00:23:27.723 "state": "online", 00:23:27.723 "raid_level": "raid1", 00:23:27.723 "superblock": true, 00:23:27.723 "num_base_bdevs": 2, 00:23:27.723 "num_base_bdevs_discovered": 2, 00:23:27.723 "num_base_bdevs_operational": 2, 00:23:27.723 "base_bdevs_list": [ 00:23:27.723 { 00:23:27.723 "name": "pt1", 00:23:27.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:27.723 "is_configured": true, 00:23:27.723 "data_offset": 256, 00:23:27.723 "data_size": 7936 00:23:27.723 }, 00:23:27.723 { 00:23:27.723 "name": "pt2", 00:23:27.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:27.723 "is_configured": true, 00:23:27.723 "data_offset": 256, 00:23:27.723 "data_size": 7936 00:23:27.723 } 00:23:27.723 ] 00:23:27.723 }' 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.723 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:28.291 [2024-11-27 04:45:15.768290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:28.291 "name": "raid_bdev1", 00:23:28.291 "aliases": [ 00:23:28.291 "55b342fb-d772-4351-b707-0a6e628b9383" 00:23:28.291 ], 00:23:28.291 "product_name": "Raid Volume", 00:23:28.291 "block_size": 4096, 00:23:28.291 "num_blocks": 7936, 00:23:28.291 "uuid": "55b342fb-d772-4351-b707-0a6e628b9383", 00:23:28.291 "md_size": 32, 00:23:28.291 "md_interleave": false, 00:23:28.291 "dif_type": 0, 00:23:28.291 "assigned_rate_limits": { 00:23:28.291 "rw_ios_per_sec": 0, 00:23:28.291 "rw_mbytes_per_sec": 0, 00:23:28.291 "r_mbytes_per_sec": 0, 00:23:28.291 "w_mbytes_per_sec": 0 00:23:28.291 }, 00:23:28.291 "claimed": false, 00:23:28.291 "zoned": false, 
00:23:28.291 "supported_io_types": { 00:23:28.291 "read": true, 00:23:28.291 "write": true, 00:23:28.291 "unmap": false, 00:23:28.291 "flush": false, 00:23:28.291 "reset": true, 00:23:28.291 "nvme_admin": false, 00:23:28.291 "nvme_io": false, 00:23:28.291 "nvme_io_md": false, 00:23:28.291 "write_zeroes": true, 00:23:28.291 "zcopy": false, 00:23:28.291 "get_zone_info": false, 00:23:28.291 "zone_management": false, 00:23:28.291 "zone_append": false, 00:23:28.291 "compare": false, 00:23:28.291 "compare_and_write": false, 00:23:28.291 "abort": false, 00:23:28.291 "seek_hole": false, 00:23:28.291 "seek_data": false, 00:23:28.291 "copy": false, 00:23:28.291 "nvme_iov_md": false 00:23:28.291 }, 00:23:28.291 "memory_domains": [ 00:23:28.291 { 00:23:28.291 "dma_device_id": "system", 00:23:28.291 "dma_device_type": 1 00:23:28.291 }, 00:23:28.291 { 00:23:28.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.291 "dma_device_type": 2 00:23:28.291 }, 00:23:28.291 { 00:23:28.291 "dma_device_id": "system", 00:23:28.291 "dma_device_type": 1 00:23:28.291 }, 00:23:28.291 { 00:23:28.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.291 "dma_device_type": 2 00:23:28.291 } 00:23:28.291 ], 00:23:28.291 "driver_specific": { 00:23:28.291 "raid": { 00:23:28.291 "uuid": "55b342fb-d772-4351-b707-0a6e628b9383", 00:23:28.291 "strip_size_kb": 0, 00:23:28.291 "state": "online", 00:23:28.291 "raid_level": "raid1", 00:23:28.291 "superblock": true, 00:23:28.291 "num_base_bdevs": 2, 00:23:28.291 "num_base_bdevs_discovered": 2, 00:23:28.291 "num_base_bdevs_operational": 2, 00:23:28.291 "base_bdevs_list": [ 00:23:28.291 { 00:23:28.291 "name": "pt1", 00:23:28.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:28.291 "is_configured": true, 00:23:28.291 "data_offset": 256, 00:23:28.291 "data_size": 7936 00:23:28.291 }, 00:23:28.291 { 00:23:28.291 "name": "pt2", 00:23:28.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:28.291 "is_configured": true, 00:23:28.291 "data_offset": 256, 
00:23:28.291 "data_size": 7936 00:23:28.291 } 00:23:28.291 ] 00:23:28.291 } 00:23:28.291 } 00:23:28.291 }' 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:28.291 pt2' 00:23:28.291 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:28.550 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:28.550 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:28.550 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:28.551 04:45:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:28.551 [2024-11-27 04:45:16.036214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=55b342fb-d772-4351-b707-0a6e628b9383 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 55b342fb-d772-4351-b707-0a6e628b9383 ']' 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.551 [2024-11-27 04:45:16.087830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:28.551 [2024-11-27 04:45:16.088196] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:28.551 [2024-11-27 04:45:16.088378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:28.551 [2024-11-27 04:45:16.088471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:28.551 [2024-11-27 04:45:16.088495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.551 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:23:28.809 04:45:16 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.809 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.809 [2024-11-27 04:45:16.227932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:28.809 [2024-11-27 04:45:16.230883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:28.809 [2024-11-27 04:45:16.231893] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:28.809 [2024-11-27 04:45:16.232009] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:28.809 [2024-11-27 04:45:16.232043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:28.809 [2024-11-27 04:45:16.232064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:28.809 request: 00:23:28.809 { 00:23:28.809 "name": 
"raid_bdev1", 00:23:28.809 "raid_level": "raid1", 00:23:28.809 "base_bdevs": [ 00:23:28.809 "malloc1", 00:23:28.809 "malloc2" 00:23:28.809 ], 00:23:28.809 "superblock": false, 00:23:28.809 "method": "bdev_raid_create", 00:23:28.809 "req_id": 1 00:23:28.809 } 00:23:28.809 Got JSON-RPC error response 00:23:28.809 response: 00:23:28.809 { 00:23:28.809 "code": -17, 00:23:28.809 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:28.809 } 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 [2024-11-27 04:45:16.300244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:28.810 [2024-11-27 04:45:16.300635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.810 [2024-11-27 04:45:16.300696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:28.810 [2024-11-27 04:45:16.300725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.810 [2024-11-27 04:45:16.303864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.810 [2024-11-27 04:45:16.303921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:28.810 [2024-11-27 04:45:16.304018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:28.810 [2024-11-27 04:45:16.304111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:28.810 pt1 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.810 "name": "raid_bdev1", 00:23:28.810 "uuid": "55b342fb-d772-4351-b707-0a6e628b9383", 00:23:28.810 "strip_size_kb": 0, 00:23:28.810 "state": "configuring", 00:23:28.810 "raid_level": "raid1", 00:23:28.810 "superblock": true, 00:23:28.810 "num_base_bdevs": 2, 00:23:28.810 "num_base_bdevs_discovered": 1, 00:23:28.810 "num_base_bdevs_operational": 2, 00:23:28.810 "base_bdevs_list": [ 00:23:28.810 { 00:23:28.810 "name": "pt1", 00:23:28.810 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:28.810 "is_configured": true, 00:23:28.810 "data_offset": 256, 00:23:28.810 "data_size": 7936 00:23:28.810 }, 00:23:28.810 { 00:23:28.810 "name": null, 00:23:28.810 
"uuid": "00000000-0000-0000-0000-000000000002", 00:23:28.810 "is_configured": false, 00:23:28.810 "data_offset": 256, 00:23:28.810 "data_size": 7936 00:23:28.810 } 00:23:28.810 ] 00:23:28.810 }' 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.810 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.376 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:29.376 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:29.376 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:29.376 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:29.376 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.376 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.376 [2024-11-27 04:45:16.864372] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:29.376 [2024-11-27 04:45:16.864560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.376 [2024-11-27 04:45:16.864604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:29.376 [2024-11-27 04:45:16.864639] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.376 [2024-11-27 04:45:16.865053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.376 [2024-11-27 04:45:16.865094] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:29.376 [2024-11-27 04:45:16.865185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:23:29.376 [2024-11-27 04:45:16.865232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:29.377 [2024-11-27 04:45:16.865402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:29.377 [2024-11-27 04:45:16.865428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:29.377 [2024-11-27 04:45:16.865548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:29.377 [2024-11-27 04:45:16.865719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:29.377 [2024-11-27 04:45:16.865748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:29.377 [2024-11-27 04:45:16.865920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.377 pt2 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.377 "name": "raid_bdev1", 00:23:29.377 "uuid": "55b342fb-d772-4351-b707-0a6e628b9383", 00:23:29.377 "strip_size_kb": 0, 00:23:29.377 "state": "online", 00:23:29.377 "raid_level": "raid1", 00:23:29.377 "superblock": true, 00:23:29.377 "num_base_bdevs": 2, 00:23:29.377 "num_base_bdevs_discovered": 2, 00:23:29.377 "num_base_bdevs_operational": 2, 00:23:29.377 "base_bdevs_list": [ 00:23:29.377 { 00:23:29.377 "name": "pt1", 00:23:29.377 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:29.377 "is_configured": true, 00:23:29.377 "data_offset": 256, 00:23:29.377 "data_size": 7936 00:23:29.377 }, 00:23:29.377 { 00:23:29.377 "name": "pt2", 00:23:29.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:29.377 "is_configured": true, 00:23:29.377 "data_offset": 256, 
00:23:29.377 "data_size": 7936 00:23:29.377 } 00:23:29.377 ] 00:23:29.377 }' 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.377 04:45:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:29.941 [2024-11-27 04:45:17.396861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.941 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:29.941 "name": "raid_bdev1", 00:23:29.941 "aliases": [ 00:23:29.941 "55b342fb-d772-4351-b707-0a6e628b9383" 00:23:29.941 ], 00:23:29.941 "product_name": 
"Raid Volume", 00:23:29.941 "block_size": 4096, 00:23:29.941 "num_blocks": 7936, 00:23:29.941 "uuid": "55b342fb-d772-4351-b707-0a6e628b9383", 00:23:29.941 "md_size": 32, 00:23:29.941 "md_interleave": false, 00:23:29.941 "dif_type": 0, 00:23:29.941 "assigned_rate_limits": { 00:23:29.941 "rw_ios_per_sec": 0, 00:23:29.941 "rw_mbytes_per_sec": 0, 00:23:29.941 "r_mbytes_per_sec": 0, 00:23:29.941 "w_mbytes_per_sec": 0 00:23:29.941 }, 00:23:29.941 "claimed": false, 00:23:29.941 "zoned": false, 00:23:29.941 "supported_io_types": { 00:23:29.941 "read": true, 00:23:29.941 "write": true, 00:23:29.941 "unmap": false, 00:23:29.941 "flush": false, 00:23:29.941 "reset": true, 00:23:29.941 "nvme_admin": false, 00:23:29.941 "nvme_io": false, 00:23:29.941 "nvme_io_md": false, 00:23:29.941 "write_zeroes": true, 00:23:29.941 "zcopy": false, 00:23:29.941 "get_zone_info": false, 00:23:29.941 "zone_management": false, 00:23:29.941 "zone_append": false, 00:23:29.941 "compare": false, 00:23:29.941 "compare_and_write": false, 00:23:29.941 "abort": false, 00:23:29.941 "seek_hole": false, 00:23:29.941 "seek_data": false, 00:23:29.941 "copy": false, 00:23:29.941 "nvme_iov_md": false 00:23:29.941 }, 00:23:29.941 "memory_domains": [ 00:23:29.941 { 00:23:29.941 "dma_device_id": "system", 00:23:29.941 "dma_device_type": 1 00:23:29.941 }, 00:23:29.941 { 00:23:29.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.941 "dma_device_type": 2 00:23:29.941 }, 00:23:29.941 { 00:23:29.941 "dma_device_id": "system", 00:23:29.941 "dma_device_type": 1 00:23:29.941 }, 00:23:29.941 { 00:23:29.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.941 "dma_device_type": 2 00:23:29.941 } 00:23:29.941 ], 00:23:29.941 "driver_specific": { 00:23:29.941 "raid": { 00:23:29.941 "uuid": "55b342fb-d772-4351-b707-0a6e628b9383", 00:23:29.941 "strip_size_kb": 0, 00:23:29.941 "state": "online", 00:23:29.941 "raid_level": "raid1", 00:23:29.941 "superblock": true, 00:23:29.941 "num_base_bdevs": 2, 00:23:29.941 
"num_base_bdevs_discovered": 2, 00:23:29.941 "num_base_bdevs_operational": 2, 00:23:29.941 "base_bdevs_list": [ 00:23:29.941 { 00:23:29.941 "name": "pt1", 00:23:29.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:29.941 "is_configured": true, 00:23:29.941 "data_offset": 256, 00:23:29.941 "data_size": 7936 00:23:29.941 }, 00:23:29.941 { 00:23:29.941 "name": "pt2", 00:23:29.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:29.941 "is_configured": true, 00:23:29.941 "data_offset": 256, 00:23:29.942 "data_size": 7936 00:23:29.942 } 00:23:29.942 ] 00:23:29.942 } 00:23:29.942 } 00:23:29.942 }' 00:23:29.942 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:29.942 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:29.942 pt2' 00:23:29.942 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:29.942 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:29.942 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:29.942 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:29.942 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.942 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.942 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.200 
04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.200 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.201 [2024-11-27 04:45:17.673041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 55b342fb-d772-4351-b707-0a6e628b9383 '!=' 55b342fb-d772-4351-b707-0a6e628b9383 ']' 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.201 [2024-11-27 04:45:17.720635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.201 04:45:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.201 "name": "raid_bdev1", 00:23:30.201 "uuid": "55b342fb-d772-4351-b707-0a6e628b9383", 00:23:30.201 "strip_size_kb": 0, 00:23:30.201 "state": "online", 00:23:30.201 "raid_level": "raid1", 00:23:30.201 "superblock": true, 00:23:30.201 "num_base_bdevs": 2, 00:23:30.201 "num_base_bdevs_discovered": 1, 00:23:30.201 "num_base_bdevs_operational": 1, 00:23:30.201 "base_bdevs_list": [ 00:23:30.201 { 00:23:30.201 "name": null, 00:23:30.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.201 "is_configured": false, 00:23:30.201 "data_offset": 0, 00:23:30.201 "data_size": 7936 00:23:30.201 }, 00:23:30.201 { 00:23:30.201 "name": "pt2", 00:23:30.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:30.201 "is_configured": true, 00:23:30.201 "data_offset": 256, 00:23:30.201 "data_size": 7936 00:23:30.201 } 00:23:30.201 ] 00:23:30.201 }' 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:23:30.201 04:45:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.768 [2024-11-27 04:45:18.244788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:30.768 [2024-11-27 04:45:18.244885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:30.768 [2024-11-27 04:45:18.245052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.768 [2024-11-27 04:45:18.245140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:30.768 [2024-11-27 04:45:18.245176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:30.768 04:45:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.768 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.768 [2024-11-27 04:45:18.324739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:30.768 [2024-11-27 04:45:18.324896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.768 
[2024-11-27 04:45:18.324932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:30.768 [2024-11-27 04:45:18.324955] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.768 [2024-11-27 04:45:18.328126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.768 [2024-11-27 04:45:18.328223] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:30.768 [2024-11-27 04:45:18.328334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:30.768 [2024-11-27 04:45:18.328424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:30.768 [2024-11-27 04:45:18.328583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:30.768 [2024-11-27 04:45:18.328610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:30.768 [2024-11-27 04:45:18.328720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:30.768 [2024-11-27 04:45:18.328913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:30.768 [2024-11-27 04:45:18.328942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:30.768 [2024-11-27 04:45:18.329158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.768 pt2 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.769 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.028 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.028 "name": "raid_bdev1", 00:23:31.028 "uuid": "55b342fb-d772-4351-b707-0a6e628b9383", 00:23:31.028 "strip_size_kb": 0, 00:23:31.028 "state": "online", 00:23:31.028 "raid_level": "raid1", 00:23:31.028 "superblock": true, 00:23:31.028 "num_base_bdevs": 2, 00:23:31.028 "num_base_bdevs_discovered": 1, 00:23:31.028 "num_base_bdevs_operational": 1, 00:23:31.028 "base_bdevs_list": [ 00:23:31.028 { 00:23:31.028 
"name": null, 00:23:31.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.028 "is_configured": false, 00:23:31.028 "data_offset": 256, 00:23:31.028 "data_size": 7936 00:23:31.028 }, 00:23:31.028 { 00:23:31.028 "name": "pt2", 00:23:31.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:31.028 "is_configured": true, 00:23:31.028 "data_offset": 256, 00:23:31.028 "data_size": 7936 00:23:31.028 } 00:23:31.028 ] 00:23:31.028 }' 00:23:31.028 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.028 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:31.286 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:31.286 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.286 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:31.286 [2024-11-27 04:45:18.860956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.286 [2024-11-27 04:45:18.861045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:31.286 [2024-11-27 04:45:18.861201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.286 [2024-11-27 04:45:18.861301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.286 [2024-11-27 04:45:18.861322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:31.286 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.286 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.286 04:45:18 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.286 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:31.286 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:31.286 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:31.545 [2024-11-27 04:45:18.929065] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:31.545 [2024-11-27 04:45:18.929264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.545 [2024-11-27 04:45:18.929316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:31.545 [2024-11-27 04:45:18.929338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.545 [2024-11-27 04:45:18.932530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.545 [2024-11-27 04:45:18.934323] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:31.545 [2024-11-27 04:45:18.934465] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:23:31.545 [2024-11-27 04:45:18.934556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:31.545 [2024-11-27 04:45:18.934891] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:31.545 [2024-11-27 04:45:18.934916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.545 [2024-11-27 04:45:18.934954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:31.545 [2024-11-27 04:45:18.935058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:31.545 pt1 00:23:31.545 [2024-11-27 04:45:18.935195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:31.545 [2024-11-27 04:45:18.935214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:31.545 [2024-11-27 04:45:18.935319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:31.545 [2024-11-27 04:45:18.935487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:31.545 [2024-11-27 04:45:18.935522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.545 [2024-11-27 04:45:18.935696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
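The `verify_raid_bdev_state` helper entered here fetches the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "raid_bdev1")'`, then compares individual fields against the expected values passed in (`online raid1 0 1`). A minimal Python sketch of those comparisons, using the field values visible in this trace — the `check_state` function and its consistency check are our illustration, not SPDK code:

```python
import json

# Abridged from the raid_bdev_info JSON captured in this trace
raid_bdev_info = """{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null,  "is_configured": false, "data_offset": 256, "data_size": 7936},
    {"name": "pt2", "is_configured": true,  "data_offset": 256, "data_size": 7936}
  ]
}"""

def check_state(info_json, expected_state, raid_level, strip_size, operational):
    """Mirror the field comparisons verify_raid_bdev_state performs."""
    info = json.loads(info_json)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # sanity: discovered count equals the configured entries in base_bdevs_list
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert discovered == info["num_base_bdevs_discovered"]
    return True

print(check_state(raid_bdev_info, "online", "raid1", 0, 1))  # True
```

After pt2's superblock (seq_number 4) supersedes the existing raid bdev, only one of the two base slots is configured, which is why the expected operational/discovered count here is 1.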
00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.545 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.545 "name": "raid_bdev1", 00:23:31.545 "uuid": "55b342fb-d772-4351-b707-0a6e628b9383", 00:23:31.545 "strip_size_kb": 0, 00:23:31.545 "state": "online", 00:23:31.545 "raid_level": "raid1", 00:23:31.545 "superblock": true, 00:23:31.545 "num_base_bdevs": 2, 00:23:31.545 "num_base_bdevs_discovered": 1, 00:23:31.546 
"num_base_bdevs_operational": 1, 00:23:31.546 "base_bdevs_list": [ 00:23:31.546 { 00:23:31.546 "name": null, 00:23:31.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.546 "is_configured": false, 00:23:31.546 "data_offset": 256, 00:23:31.546 "data_size": 7936 00:23:31.546 }, 00:23:31.546 { 00:23:31.546 "name": "pt2", 00:23:31.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:31.546 "is_configured": true, 00:23:31.546 "data_offset": 256, 00:23:31.546 "data_size": 7936 00:23:31.546 } 00:23:31.546 ] 00:23:31.546 }' 00:23:31.546 04:45:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.546 04:45:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.112 [2024-11-27 
04:45:19.531147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 55b342fb-d772-4351-b707-0a6e628b9383 '!=' 55b342fb-d772-4351-b707-0a6e628b9383 ']' 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88043 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88043 ']' 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88043 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88043 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88043' 00:23:32.112 killing process with pid 88043 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 88043 00:23:32.112 [2024-11-27 04:45:19.625755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:32.112 04:45:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88043 00:23:32.112 [2024-11-27 04:45:19.626260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:23:32.112 [2024-11-27 04:45:19.626402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.112 [2024-11-27 04:45:19.626618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:32.371 [2024-11-27 04:45:19.860215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:33.799 ************************************ 00:23:33.799 END TEST raid_superblock_test_md_separate 00:23:33.799 ************************************ 00:23:33.799 04:45:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:23:33.799 00:23:33.799 real 0m7.111s 00:23:33.799 user 0m11.042s 00:23:33.799 sys 0m1.084s 00:23:33.799 04:45:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.799 04:45:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 04:45:21 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:23:33.799 04:45:21 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:23:33.799 04:45:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:33.799 04:45:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.799 04:45:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 ************************************ 00:23:33.799 START TEST raid_rebuild_test_sb_md_separate 00:23:33.799 ************************************ 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:33.799 
04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88379 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88379 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88379 ']' 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.799 04:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 [2024-11-27 04:45:21.267324] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:23:33.799 [2024-11-27 04:45:21.267768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88379 ] 00:23:33.799 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:33.799 Zero copy mechanism will not be used. 00:23:34.058 [2024-11-27 04:45:21.447740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.058 [2024-11-27 04:45:21.608726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.317 [2024-11-27 04:45:21.854112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:34.317 [2024-11-27 04:45:21.854201] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:34.882 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.882 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:34.882 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:34.882 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:23:34.882 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.883 BaseBdev1_malloc 
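The base bdevs in this test are created with `bdev_malloc_create 32 4096 -m 32`: a 32 MiB malloc bdev with a 4096-byte block size and 32 bytes of separate metadata per block (the `_md_separate` variant under test). Assuming those units, the block counts seen elsewhere in the trace fall out directly: the bdev holds 8192 blocks, the superblock layout reserves a 256-block data offset, and the remaining 7936 blocks match both the raid bdev's `blockcnt 7936, blocklen 4096` configure line and the 32505856-byte `dd` fill performed later over `/dev/nbd0`:

```python
MALLOC_MIB = 32        # first argument to bdev_malloc_create (size, assumed MiB)
BLOCKLEN = 4096        # second argument (block size in bytes)
DATA_OFFSET = 256      # blocks reserved ahead of the data region, per data_offset
                       # in the raid JSON (assumed to hold the on-disk superblock)

total_blocks = MALLOC_MIB * 1024 * 1024 // BLOCKLEN
data_blocks = total_blocks - DATA_OFFSET

print(total_blocks)               # 8192
print(data_blocks)                # 7936  -> matches blockcnt / data_size in the trace
print(data_blocks * BLOCKLEN)     # 32505856 -> byte count dd reports for the full fill
```

That the three figures agree is a useful cross-check when reading these logs: `data_size` is always the base bdev's block count minus `data_offset`.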
00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.883 [2024-11-27 04:45:22.384690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:34.883 [2024-11-27 04:45:22.384874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.883 [2024-11-27 04:45:22.384916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:34.883 [2024-11-27 04:45:22.384939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.883 [2024-11-27 04:45:22.387962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.883 [2024-11-27 04:45:22.388018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:34.883 BaseBdev1 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.883 BaseBdev2_malloc 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.883 [2024-11-27 04:45:22.449326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:34.883 [2024-11-27 04:45:22.449453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.883 [2024-11-27 04:45:22.449491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:34.883 [2024-11-27 04:45:22.449514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.883 [2024-11-27 04:45:22.452538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.883 [2024-11-27 04:45:22.452596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:34.883 BaseBdev2 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.883 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.142 spare_malloc 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.142 spare_delay 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.142 [2024-11-27 04:45:22.536111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:35.142 [2024-11-27 04:45:22.536240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.142 [2024-11-27 04:45:22.536279] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:35.142 [2024-11-27 04:45:22.536304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.142 [2024-11-27 04:45:22.539373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.142 [2024-11-27 04:45:22.539715] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:35.142 spare 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.142 [2024-11-27 04:45:22.548187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:35.142 [2024-11-27 04:45:22.551162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:35.142 [2024-11-27 04:45:22.551472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:35.142 [2024-11-27 04:45:22.551502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:35.142 [2024-11-27 04:45:22.551610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:35.142 [2024-11-27 04:45:22.551880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:35.142 [2024-11-27 04:45:22.551912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:35.142 [2024-11-27 04:45:22.552109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:35.142 04:45:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.142 "name": "raid_bdev1", 00:23:35.142 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:35.142 "strip_size_kb": 0, 00:23:35.142 "state": "online", 00:23:35.142 "raid_level": "raid1", 00:23:35.142 "superblock": true, 00:23:35.142 "num_base_bdevs": 2, 00:23:35.142 "num_base_bdevs_discovered": 2, 00:23:35.142 "num_base_bdevs_operational": 2, 00:23:35.142 "base_bdevs_list": [ 00:23:35.142 { 00:23:35.142 "name": "BaseBdev1", 00:23:35.142 "uuid": "9042c406-48a3-5118-bd56-1c07d86f6f24", 00:23:35.142 "is_configured": true, 00:23:35.142 "data_offset": 256, 00:23:35.142 "data_size": 7936 00:23:35.142 }, 00:23:35.142 { 00:23:35.142 "name": "BaseBdev2", 00:23:35.142 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:35.142 "is_configured": true, 00:23:35.142 "data_offset": 256, 00:23:35.142 "data_size": 7936 
00:23:35.142 } 00:23:35.142 ] 00:23:35.142 }' 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.142 04:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.709 [2024-11-27 04:45:23.105093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:35.709 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:35.968 [2024-11-27 04:45:23.528779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:35.968 /dev/nbd0 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.968 1+0 records in 00:23:35.968 1+0 records out 00:23:35.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603696 s, 6.8 MB/s 00:23:35.968 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.227 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:36.227 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.227 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:36.227 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:36.227 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:36.227 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:36.227 04:45:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:36.227 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:36.227 04:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:23:37.162 7936+0 records in 00:23:37.162 7936+0 records out 00:23:37.162 32505856 bytes (33 MB, 31 MiB) copied, 1.04191 s, 31.2 MB/s 00:23:37.162 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:37.162 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:37.162 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:37.162 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:37.162 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:37.162 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.162 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:37.421 [2024-11-27 04:45:24.942576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.421 04:45:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:37.421 [2024-11-27 04:45:24.970675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:37.421 04:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.421 04:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.421 "name": "raid_bdev1", 00:23:37.421 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:37.421 "strip_size_kb": 0, 00:23:37.421 "state": "online", 00:23:37.421 "raid_level": "raid1", 00:23:37.421 "superblock": true, 00:23:37.421 "num_base_bdevs": 2, 00:23:37.421 "num_base_bdevs_discovered": 1, 00:23:37.421 "num_base_bdevs_operational": 1, 00:23:37.421 "base_bdevs_list": [ 00:23:37.421 { 00:23:37.421 "name": null, 00:23:37.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.421 "is_configured": false, 00:23:37.421 "data_offset": 0, 00:23:37.421 "data_size": 7936 00:23:37.421 }, 00:23:37.421 { 00:23:37.421 "name": "BaseBdev2", 00:23:37.421 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:37.421 "is_configured": true, 00:23:37.421 "data_offset": 256, 00:23:37.421 "data_size": 7936 00:23:37.421 } 00:23:37.421 ] 00:23:37.421 }' 00:23:37.421 04:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.421 04:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:23:38.103 04:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:38.103 04:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.103 04:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:38.103 [2024-11-27 04:45:25.446904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:38.103 [2024-11-27 04:45:25.460668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:23:38.103 04:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.103 04:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:38.103 [2024-11-27 04:45:25.463258] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:39.038 "name": "raid_bdev1", 00:23:39.038 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:39.038 "strip_size_kb": 0, 00:23:39.038 "state": "online", 00:23:39.038 "raid_level": "raid1", 00:23:39.038 "superblock": true, 00:23:39.038 "num_base_bdevs": 2, 00:23:39.038 "num_base_bdevs_discovered": 2, 00:23:39.038 "num_base_bdevs_operational": 2, 00:23:39.038 "process": { 00:23:39.038 "type": "rebuild", 00:23:39.038 "target": "spare", 00:23:39.038 "progress": { 00:23:39.038 "blocks": 2560, 00:23:39.038 "percent": 32 00:23:39.038 } 00:23:39.038 }, 00:23:39.038 "base_bdevs_list": [ 00:23:39.038 { 00:23:39.038 "name": "spare", 00:23:39.038 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:39.038 "is_configured": true, 00:23:39.038 "data_offset": 256, 00:23:39.038 "data_size": 7936 00:23:39.038 }, 00:23:39.038 { 00:23:39.038 "name": "BaseBdev2", 00:23:39.038 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:39.038 "is_configured": true, 00:23:39.038 "data_offset": 256, 00:23:39.038 "data_size": 7936 00:23:39.038 } 00:23:39.038 ] 00:23:39.038 }' 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:39.038 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.039 04:45:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:39.039 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.039 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:39.039 [2024-11-27 04:45:26.637531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.297 [2024-11-27 04:45:26.673185] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:39.297 [2024-11-27 04:45:26.673286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.297 [2024-11-27 04:45:26.673320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.297 [2024-11-27 04:45:26.673336] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.297 04:45:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.297 "name": "raid_bdev1", 00:23:39.297 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:39.297 "strip_size_kb": 0, 00:23:39.297 "state": "online", 00:23:39.297 "raid_level": "raid1", 00:23:39.297 "superblock": true, 00:23:39.297 "num_base_bdevs": 2, 00:23:39.297 "num_base_bdevs_discovered": 1, 00:23:39.297 "num_base_bdevs_operational": 1, 00:23:39.297 "base_bdevs_list": [ 00:23:39.297 { 00:23:39.297 "name": null, 00:23:39.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.297 "is_configured": false, 00:23:39.297 "data_offset": 0, 00:23:39.297 "data_size": 7936 00:23:39.297 }, 00:23:39.297 { 00:23:39.297 "name": "BaseBdev2", 00:23:39.297 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:39.297 "is_configured": true, 00:23:39.297 "data_offset": 256, 00:23:39.297 "data_size": 7936 00:23:39.297 } 00:23:39.297 ] 00:23:39.297 }' 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.297 04:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:39.865 "name": "raid_bdev1", 00:23:39.865 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:39.865 "strip_size_kb": 0, 00:23:39.865 "state": "online", 00:23:39.865 "raid_level": "raid1", 00:23:39.865 "superblock": true, 00:23:39.865 "num_base_bdevs": 2, 00:23:39.865 "num_base_bdevs_discovered": 1, 00:23:39.865 "num_base_bdevs_operational": 1, 00:23:39.865 "base_bdevs_list": [ 00:23:39.865 { 00:23:39.865 "name": null, 00:23:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.865 
"is_configured": false, 00:23:39.865 "data_offset": 0, 00:23:39.865 "data_size": 7936 00:23:39.865 }, 00:23:39.865 { 00:23:39.865 "name": "BaseBdev2", 00:23:39.865 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:39.865 "is_configured": true, 00:23:39.865 "data_offset": 256, 00:23:39.865 "data_size": 7936 00:23:39.865 } 00:23:39.865 ] 00:23:39.865 }' 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:39.865 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:39.866 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:39.866 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.866 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:39.866 [2024-11-27 04:45:27.371889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:39.866 [2024-11-27 04:45:27.384104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:23:39.866 [2024-11-27 04:45:27.386526] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:39.866 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.866 04:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:40.801 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.801 04:45:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:40.801 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:40.801 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:40.801 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:40.801 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.801 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.801 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.801 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:40.801 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.061 "name": "raid_bdev1", 00:23:41.061 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:41.061 "strip_size_kb": 0, 00:23:41.061 "state": "online", 00:23:41.061 "raid_level": "raid1", 00:23:41.061 "superblock": true, 00:23:41.061 "num_base_bdevs": 2, 00:23:41.061 "num_base_bdevs_discovered": 2, 00:23:41.061 "num_base_bdevs_operational": 2, 00:23:41.061 "process": { 00:23:41.061 "type": "rebuild", 00:23:41.061 "target": "spare", 00:23:41.061 "progress": { 00:23:41.061 "blocks": 2560, 00:23:41.061 "percent": 32 00:23:41.061 } 00:23:41.061 }, 00:23:41.061 "base_bdevs_list": [ 00:23:41.061 { 00:23:41.061 "name": "spare", 00:23:41.061 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:41.061 "is_configured": true, 00:23:41.061 "data_offset": 256, 00:23:41.061 "data_size": 7936 00:23:41.061 }, 
00:23:41.061 { 00:23:41.061 "name": "BaseBdev2", 00:23:41.061 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:41.061 "is_configured": true, 00:23:41.061 "data_offset": 256, 00:23:41.061 "data_size": 7936 00:23:41.061 } 00:23:41.061 ] 00:23:41.061 }' 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:41.061 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=769 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:41.061 04:45:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.061 "name": "raid_bdev1", 00:23:41.061 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:41.061 "strip_size_kb": 0, 00:23:41.061 "state": "online", 00:23:41.061 "raid_level": "raid1", 00:23:41.061 "superblock": true, 00:23:41.061 "num_base_bdevs": 2, 00:23:41.061 "num_base_bdevs_discovered": 2, 00:23:41.061 "num_base_bdevs_operational": 2, 00:23:41.061 "process": { 00:23:41.061 "type": "rebuild", 00:23:41.061 "target": "spare", 00:23:41.061 "progress": { 00:23:41.061 "blocks": 2816, 00:23:41.061 "percent": 35 00:23:41.061 } 00:23:41.061 }, 00:23:41.061 "base_bdevs_list": [ 00:23:41.061 { 00:23:41.061 "name": "spare", 00:23:41.061 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:41.061 "is_configured": true, 00:23:41.061 "data_offset": 256, 00:23:41.061 "data_size": 7936 00:23:41.061 }, 00:23:41.061 { 00:23:41.061 "name": "BaseBdev2", 00:23:41.061 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:41.061 
"is_configured": true, 00:23:41.061 "data_offset": 256, 00:23:41.061 "data_size": 7936 00:23:41.061 } 00:23:41.061 ] 00:23:41.061 }' 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.061 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:41.320 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.320 04:45:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:42.255 04:45:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.255 "name": "raid_bdev1", 00:23:42.255 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:42.255 "strip_size_kb": 0, 00:23:42.255 "state": "online", 00:23:42.255 "raid_level": "raid1", 00:23:42.255 "superblock": true, 00:23:42.255 "num_base_bdevs": 2, 00:23:42.255 "num_base_bdevs_discovered": 2, 00:23:42.255 "num_base_bdevs_operational": 2, 00:23:42.255 "process": { 00:23:42.255 "type": "rebuild", 00:23:42.255 "target": "spare", 00:23:42.255 "progress": { 00:23:42.255 "blocks": 5888, 00:23:42.255 "percent": 74 00:23:42.255 } 00:23:42.255 }, 00:23:42.255 "base_bdevs_list": [ 00:23:42.255 { 00:23:42.255 "name": "spare", 00:23:42.255 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:42.255 "is_configured": true, 00:23:42.255 "data_offset": 256, 00:23:42.255 "data_size": 7936 00:23:42.255 }, 00:23:42.255 { 00:23:42.255 "name": "BaseBdev2", 00:23:42.255 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:42.255 "is_configured": true, 00:23:42.255 "data_offset": 256, 00:23:42.255 "data_size": 7936 00:23:42.255 } 00:23:42.255 ] 00:23:42.255 }' 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.255 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.514 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.514 04:45:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:43.082 [2024-11-27 04:45:30.511117] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:23:43.082 [2024-11-27 04:45:30.511227] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:43.082 [2024-11-27 04:45:30.511400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:43.340 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.599 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.599 "name": "raid_bdev1", 00:23:43.599 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:43.599 "strip_size_kb": 0, 00:23:43.599 "state": "online", 00:23:43.599 "raid_level": "raid1", 00:23:43.599 "superblock": true, 00:23:43.599 
"num_base_bdevs": 2, 00:23:43.599 "num_base_bdevs_discovered": 2, 00:23:43.599 "num_base_bdevs_operational": 2, 00:23:43.599 "base_bdevs_list": [ 00:23:43.599 { 00:23:43.599 "name": "spare", 00:23:43.599 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:43.599 "is_configured": true, 00:23:43.599 "data_offset": 256, 00:23:43.599 "data_size": 7936 00:23:43.599 }, 00:23:43.599 { 00:23:43.599 "name": "BaseBdev2", 00:23:43.599 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:43.599 "is_configured": true, 00:23:43.599 "data_offset": 256, 00:23:43.599 "data_size": 7936 00:23:43.599 } 00:23:43.599 ] 00:23:43.599 }' 00:23:43.599 04:45:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.599 04:45:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.599 "name": "raid_bdev1", 00:23:43.599 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:43.599 "strip_size_kb": 0, 00:23:43.599 "state": "online", 00:23:43.599 "raid_level": "raid1", 00:23:43.599 "superblock": true, 00:23:43.599 "num_base_bdevs": 2, 00:23:43.599 "num_base_bdevs_discovered": 2, 00:23:43.599 "num_base_bdevs_operational": 2, 00:23:43.599 "base_bdevs_list": [ 00:23:43.599 { 00:23:43.599 "name": "spare", 00:23:43.599 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:43.599 "is_configured": true, 00:23:43.599 "data_offset": 256, 00:23:43.599 "data_size": 7936 00:23:43.599 }, 00:23:43.599 { 00:23:43.599 "name": "BaseBdev2", 00:23:43.599 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:43.599 "is_configured": true, 00:23:43.599 "data_offset": 256, 00:23:43.599 "data_size": 7936 00:23:43.599 } 00:23:43.599 ] 00:23:43.599 }' 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:43.599 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.858 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.858 "name": "raid_bdev1", 00:23:43.858 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:43.858 
"strip_size_kb": 0, 00:23:43.858 "state": "online", 00:23:43.858 "raid_level": "raid1", 00:23:43.858 "superblock": true, 00:23:43.858 "num_base_bdevs": 2, 00:23:43.858 "num_base_bdevs_discovered": 2, 00:23:43.858 "num_base_bdevs_operational": 2, 00:23:43.858 "base_bdevs_list": [ 00:23:43.858 { 00:23:43.858 "name": "spare", 00:23:43.858 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:43.858 "is_configured": true, 00:23:43.858 "data_offset": 256, 00:23:43.858 "data_size": 7936 00:23:43.858 }, 00:23:43.858 { 00:23:43.858 "name": "BaseBdev2", 00:23:43.858 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:43.858 "is_configured": true, 00:23:43.858 "data_offset": 256, 00:23:43.858 "data_size": 7936 00:23:43.858 } 00:23:43.858 ] 00:23:43.858 }' 00:23:43.859 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.859 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:44.117 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:44.117 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.117 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:44.117 [2024-11-27 04:45:31.730537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:44.117 [2024-11-27 04:45:31.730589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:44.117 [2024-11-27 04:45:31.730701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:44.117 [2024-11-27 04:45:31.730811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:44.117 [2024-11-27 04:45:31.730841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:23:44.117 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.117 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.117 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.117 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:44.375 04:45:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:44.633 /dev/nbd0 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:44.633 1+0 records in 00:23:44.633 1+0 records out 00:23:44.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266748 s, 15.4 MB/s 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:44.633 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:44.891 /dev/nbd1 00:23:44.891 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:44.891 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:44.891 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:44.892 1+0 records in 00:23:44.892 1+0 records out 00:23:44.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440831 s, 9.3 MB/s 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:44.892 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:45.150 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:45.150 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:45.150 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:45.150 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:23:45.150 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:45.150 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:45.150 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:45.408 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:45.408 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:45.408 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:45.408 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:45.408 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:45.408 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:45.408 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:45.408 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:45.408 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:45.408 04:45:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:45.667 [2024-11-27 04:45:33.179238] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:45.667 [2024-11-27 04:45:33.179337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.667 [2024-11-27 04:45:33.179371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:45.667 [2024-11-27 04:45:33.179386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:23:45.667 [2024-11-27 04:45:33.182157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.667 [2024-11-27 04:45:33.182201] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:45.667 [2024-11-27 04:45:33.182340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:45.667 [2024-11-27 04:45:33.182408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:45.667 [2024-11-27 04:45:33.182589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.667 spare 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.667 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:45.667 [2024-11-27 04:45:33.282732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:45.667 [2024-11-27 04:45:33.282804] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:45.667 [2024-11-27 04:45:33.282965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:23:45.667 [2024-11-27 04:45:33.283198] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:45.667 [2024-11-27 04:45:33.283224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:45.667 [2024-11-27 04:45:33.283410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.925 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.926 "name": "raid_bdev1", 00:23:45.926 "uuid": 
"53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:45.926 "strip_size_kb": 0, 00:23:45.926 "state": "online", 00:23:45.926 "raid_level": "raid1", 00:23:45.926 "superblock": true, 00:23:45.926 "num_base_bdevs": 2, 00:23:45.926 "num_base_bdevs_discovered": 2, 00:23:45.926 "num_base_bdevs_operational": 2, 00:23:45.926 "base_bdevs_list": [ 00:23:45.926 { 00:23:45.926 "name": "spare", 00:23:45.926 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:45.926 "is_configured": true, 00:23:45.926 "data_offset": 256, 00:23:45.926 "data_size": 7936 00:23:45.926 }, 00:23:45.926 { 00:23:45.926 "name": "BaseBdev2", 00:23:45.926 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:45.926 "is_configured": true, 00:23:45.926 "data_offset": 256, 00:23:45.926 "data_size": 7936 00:23:45.926 } 00:23:45.926 ] 00:23:45.926 }' 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:45.926 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:46.211 "name": "raid_bdev1", 00:23:46.211 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:46.211 "strip_size_kb": 0, 00:23:46.211 "state": "online", 00:23:46.211 "raid_level": "raid1", 00:23:46.211 "superblock": true, 00:23:46.211 "num_base_bdevs": 2, 00:23:46.211 "num_base_bdevs_discovered": 2, 00:23:46.211 "num_base_bdevs_operational": 2, 00:23:46.211 "base_bdevs_list": [ 00:23:46.211 { 00:23:46.211 "name": "spare", 00:23:46.211 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:46.211 "is_configured": true, 00:23:46.211 "data_offset": 256, 00:23:46.211 "data_size": 7936 00:23:46.211 }, 00:23:46.211 { 00:23:46.211 "name": "BaseBdev2", 00:23:46.211 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:46.211 "is_configured": true, 00:23:46.211 "data_offset": 256, 00:23:46.211 "data_size": 7936 00:23:46.211 } 00:23:46.211 ] 00:23:46.211 }' 00:23:46.211 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:46.470 [2024-11-27 04:45:33.959708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.470 04:45:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.470 04:45:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.470 04:45:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.470 "name": "raid_bdev1", 00:23:46.470 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:46.470 "strip_size_kb": 0, 00:23:46.470 "state": "online", 00:23:46.470 "raid_level": "raid1", 00:23:46.470 "superblock": true, 00:23:46.470 "num_base_bdevs": 2, 00:23:46.470 "num_base_bdevs_discovered": 1, 00:23:46.470 "num_base_bdevs_operational": 1, 00:23:46.470 "base_bdevs_list": [ 00:23:46.470 { 00:23:46.470 "name": null, 00:23:46.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.470 "is_configured": false, 00:23:46.470 "data_offset": 0, 00:23:46.470 "data_size": 7936 00:23:46.470 }, 00:23:46.470 { 00:23:46.470 "name": "BaseBdev2", 00:23:46.470 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:46.470 "is_configured": true, 00:23:46.470 "data_offset": 256, 00:23:46.470 "data_size": 7936 00:23:46.470 } 00:23:46.470 ] 00:23:46.470 }' 00:23:46.470 04:45:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.470 04:45:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:47.063 04:45:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:47.063 04:45:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.063 04:45:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:47.063 [2024-11-27 04:45:34.471930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:47.063 [2024-11-27 04:45:34.472205] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:47.063 [2024-11-27 04:45:34.472234] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:47.063 [2024-11-27 04:45:34.472320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:47.063 [2024-11-27 04:45:34.486269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:23:47.063 04:45:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.063 04:45:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:47.063 [2024-11-27 04:45:34.489595] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:47.999 04:45:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:47.999 "name": "raid_bdev1", 00:23:47.999 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:47.999 "strip_size_kb": 0, 00:23:47.999 "state": "online", 00:23:47.999 "raid_level": "raid1", 00:23:47.999 "superblock": true, 00:23:47.999 "num_base_bdevs": 2, 00:23:47.999 "num_base_bdevs_discovered": 2, 00:23:47.999 "num_base_bdevs_operational": 2, 00:23:47.999 "process": { 00:23:47.999 "type": "rebuild", 00:23:47.999 "target": "spare", 00:23:47.999 "progress": { 00:23:47.999 "blocks": 2560, 00:23:47.999 "percent": 32 00:23:47.999 } 00:23:47.999 }, 00:23:47.999 "base_bdevs_list": [ 00:23:47.999 { 00:23:47.999 "name": "spare", 00:23:47.999 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:47.999 "is_configured": true, 00:23:47.999 "data_offset": 256, 00:23:47.999 "data_size": 7936 00:23:47.999 }, 00:23:47.999 { 00:23:47.999 "name": "BaseBdev2", 00:23:47.999 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:47.999 "is_configured": true, 00:23:47.999 "data_offset": 256, 00:23:47.999 "data_size": 7936 00:23:47.999 } 00:23:47.999 ] 00:23:47.999 
}' 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:47.999 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:48.258 [2024-11-27 04:45:35.655769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:48.258 [2024-11-27 04:45:35.699724] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:48.258 [2024-11-27 04:45:35.699844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.258 [2024-11-27 04:45:35.699869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:48.258 [2024-11-27 04:45:35.699909] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:48.258 "name": "raid_bdev1", 00:23:48.258 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:48.258 "strip_size_kb": 0, 00:23:48.258 "state": "online", 00:23:48.258 "raid_level": "raid1", 00:23:48.258 "superblock": true, 00:23:48.258 "num_base_bdevs": 2, 00:23:48.258 "num_base_bdevs_discovered": 1, 00:23:48.258 "num_base_bdevs_operational": 1, 00:23:48.258 "base_bdevs_list": [ 00:23:48.258 { 00:23:48.258 "name": 
null, 00:23:48.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.258 "is_configured": false, 00:23:48.258 "data_offset": 0, 00:23:48.258 "data_size": 7936 00:23:48.258 }, 00:23:48.258 { 00:23:48.258 "name": "BaseBdev2", 00:23:48.258 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:48.258 "is_configured": true, 00:23:48.258 "data_offset": 256, 00:23:48.258 "data_size": 7936 00:23:48.258 } 00:23:48.258 ] 00:23:48.258 }' 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:48.258 04:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:48.825 04:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:48.825 04:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.825 04:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:48.825 [2024-11-27 04:45:36.262519] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:48.825 [2024-11-27 04:45:36.262627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.825 [2024-11-27 04:45:36.262664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:48.825 [2024-11-27 04:45:36.262684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.825 [2024-11-27 04:45:36.263036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.825 [2024-11-27 04:45:36.263079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:48.825 [2024-11-27 04:45:36.263166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:48.825 [2024-11-27 04:45:36.263191] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:48.825 [2024-11-27 04:45:36.263206] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:48.825 [2024-11-27 04:45:36.263249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:48.825 [2024-11-27 04:45:36.276835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:23:48.825 spare 00:23:48.825 04:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.825 04:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:48.825 [2024-11-27 04:45:36.279482] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:49.758 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.758 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:49.758 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:49.758 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:49.758 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:49.758 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.758 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.758 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.758 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:49.758 04:45:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.758 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:49.758 "name": "raid_bdev1", 00:23:49.758 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:49.758 "strip_size_kb": 0, 00:23:49.758 "state": "online", 00:23:49.758 "raid_level": "raid1", 00:23:49.759 "superblock": true, 00:23:49.759 "num_base_bdevs": 2, 00:23:49.759 "num_base_bdevs_discovered": 2, 00:23:49.759 "num_base_bdevs_operational": 2, 00:23:49.759 "process": { 00:23:49.759 "type": "rebuild", 00:23:49.759 "target": "spare", 00:23:49.759 "progress": { 00:23:49.759 "blocks": 2560, 00:23:49.759 "percent": 32 00:23:49.759 } 00:23:49.759 }, 00:23:49.759 "base_bdevs_list": [ 00:23:49.759 { 00:23:49.759 "name": "spare", 00:23:49.759 "uuid": "3405ada8-9cdf-5d95-9f6d-e6d9ad0b8ace", 00:23:49.759 "is_configured": true, 00:23:49.759 "data_offset": 256, 00:23:49.759 "data_size": 7936 00:23:49.759 }, 00:23:49.759 { 00:23:49.759 "name": "BaseBdev2", 00:23:49.759 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:49.759 "is_configured": true, 00:23:49.759 "data_offset": 256, 00:23:49.759 "data_size": 7936 00:23:49.759 } 00:23:49.759 ] 00:23:49.759 }' 00:23:49.759 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:50.017 [2024-11-27 04:45:37.452985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:50.017 [2024-11-27 04:45:37.488849] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:50.017 [2024-11-27 04:45:37.488941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.017 [2024-11-27 04:45:37.488968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:50.017 [2024-11-27 04:45:37.488980] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:50.017 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.018 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:50.018 "name": "raid_bdev1", 00:23:50.018 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:50.018 "strip_size_kb": 0, 00:23:50.018 "state": "online", 00:23:50.018 "raid_level": "raid1", 00:23:50.018 "superblock": true, 00:23:50.018 "num_base_bdevs": 2, 00:23:50.018 "num_base_bdevs_discovered": 1, 00:23:50.018 "num_base_bdevs_operational": 1, 00:23:50.018 "base_bdevs_list": [ 00:23:50.018 { 00:23:50.018 "name": null, 00:23:50.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.018 "is_configured": false, 00:23:50.018 "data_offset": 0, 00:23:50.018 "data_size": 7936 00:23:50.018 }, 00:23:50.018 { 00:23:50.018 "name": "BaseBdev2", 00:23:50.018 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:50.018 "is_configured": true, 00:23:50.018 "data_offset": 256, 00:23:50.018 "data_size": 7936 00:23:50.018 } 00:23:50.018 ] 00:23:50.018 }' 00:23:50.018 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:50.018 04:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:50.582 "name": "raid_bdev1", 00:23:50.582 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:50.582 "strip_size_kb": 0, 00:23:50.582 "state": "online", 00:23:50.582 "raid_level": "raid1", 00:23:50.582 "superblock": true, 00:23:50.582 "num_base_bdevs": 2, 00:23:50.582 "num_base_bdevs_discovered": 1, 00:23:50.582 "num_base_bdevs_operational": 1, 00:23:50.582 "base_bdevs_list": [ 00:23:50.582 { 00:23:50.582 "name": null, 00:23:50.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.582 "is_configured": false, 00:23:50.582 "data_offset": 0, 00:23:50.582 "data_size": 7936 00:23:50.582 }, 00:23:50.582 { 00:23:50.582 "name": "BaseBdev2", 00:23:50.582 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 
00:23:50.582 "is_configured": true, 00:23:50.582 "data_offset": 256, 00:23:50.582 "data_size": 7936 00:23:50.582 } 00:23:50.582 ] 00:23:50.582 }' 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:50.582 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:50.583 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.583 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:50.841 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.841 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:50.841 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.841 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:50.841 [2024-11-27 04:45:38.211509] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:50.841 [2024-11-27 04:45:38.211603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.841 [2024-11-27 04:45:38.211634] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:50.841 [2024-11-27 04:45:38.211648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:23:50.841 [2024-11-27 04:45:38.211976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.841 [2024-11-27 04:45:38.212000] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:50.841 [2024-11-27 04:45:38.212069] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:50.841 [2024-11-27 04:45:38.212091] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:50.841 [2024-11-27 04:45:38.212105] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:50.841 [2024-11-27 04:45:38.212119] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:50.841 BaseBdev1 00:23:50.841 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.841 04:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.776 04:45:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.776 "name": "raid_bdev1", 00:23:51.776 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:51.776 "strip_size_kb": 0, 00:23:51.776 "state": "online", 00:23:51.776 "raid_level": "raid1", 00:23:51.776 "superblock": true, 00:23:51.776 "num_base_bdevs": 2, 00:23:51.776 "num_base_bdevs_discovered": 1, 00:23:51.776 "num_base_bdevs_operational": 1, 00:23:51.776 "base_bdevs_list": [ 00:23:51.776 { 00:23:51.776 "name": null, 00:23:51.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.776 "is_configured": false, 00:23:51.776 "data_offset": 0, 00:23:51.776 "data_size": 7936 00:23:51.776 }, 00:23:51.776 { 00:23:51.776 "name": "BaseBdev2", 00:23:51.776 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:51.776 "is_configured": true, 00:23:51.776 "data_offset": 256, 00:23:51.776 "data_size": 7936 00:23:51.776 } 00:23:51.776 ] 00:23:51.776 }' 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.776 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:52.400 "name": "raid_bdev1", 00:23:52.400 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:52.400 "strip_size_kb": 0, 00:23:52.400 "state": "online", 00:23:52.400 "raid_level": "raid1", 00:23:52.400 "superblock": true, 00:23:52.400 "num_base_bdevs": 2, 00:23:52.400 "num_base_bdevs_discovered": 1, 00:23:52.400 "num_base_bdevs_operational": 1, 00:23:52.400 "base_bdevs_list": [ 00:23:52.400 { 00:23:52.400 "name": null, 00:23:52.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.400 
"is_configured": false, 00:23:52.400 "data_offset": 0, 00:23:52.400 "data_size": 7936 00:23:52.400 }, 00:23:52.400 { 00:23:52.400 "name": "BaseBdev2", 00:23:52.400 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:52.400 "is_configured": true, 00:23:52.400 "data_offset": 256, 00:23:52.400 "data_size": 7936 00:23:52.400 } 00:23:52.400 ] 00:23:52.400 }' 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:52.400 04:45:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.400 [2024-11-27 04:45:39.924132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:52.400 [2024-11-27 04:45:39.924377] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:52.400 [2024-11-27 04:45:39.924403] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:52.400 request: 00:23:52.400 { 00:23:52.400 "base_bdev": "BaseBdev1", 00:23:52.400 "raid_bdev": "raid_bdev1", 00:23:52.400 "method": "bdev_raid_add_base_bdev", 00:23:52.400 "req_id": 1 00:23:52.400 } 00:23:52.400 Got JSON-RPC error response 00:23:52.400 response: 00:23:52.400 { 00:23:52.400 "code": -22, 00:23:52.400 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:52.400 } 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.400 04:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.335 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:53.594 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.594 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.594 "name": "raid_bdev1", 00:23:53.594 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:53.594 "strip_size_kb": 0, 00:23:53.594 "state": "online", 00:23:53.594 "raid_level": "raid1", 00:23:53.594 "superblock": true, 00:23:53.594 "num_base_bdevs": 2, 00:23:53.594 
"num_base_bdevs_discovered": 1, 00:23:53.594 "num_base_bdevs_operational": 1, 00:23:53.594 "base_bdevs_list": [ 00:23:53.594 { 00:23:53.594 "name": null, 00:23:53.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.594 "is_configured": false, 00:23:53.594 "data_offset": 0, 00:23:53.594 "data_size": 7936 00:23:53.594 }, 00:23:53.594 { 00:23:53.594 "name": "BaseBdev2", 00:23:53.594 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:53.594 "is_configured": true, 00:23:53.594 "data_offset": 256, 00:23:53.594 "data_size": 7936 00:23:53.594 } 00:23:53.594 ] 00:23:53.594 }' 00:23:53.594 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.594 04:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:54.162 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:54.162 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:54.163 "name": "raid_bdev1", 00:23:54.163 "uuid": "53a48d2c-2e82-4ed7-b38e-dcac783ba3f6", 00:23:54.163 "strip_size_kb": 0, 00:23:54.163 "state": "online", 00:23:54.163 "raid_level": "raid1", 00:23:54.163 "superblock": true, 00:23:54.163 "num_base_bdevs": 2, 00:23:54.163 "num_base_bdevs_discovered": 1, 00:23:54.163 "num_base_bdevs_operational": 1, 00:23:54.163 "base_bdevs_list": [ 00:23:54.163 { 00:23:54.163 "name": null, 00:23:54.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.163 "is_configured": false, 00:23:54.163 "data_offset": 0, 00:23:54.163 "data_size": 7936 00:23:54.163 }, 00:23:54.163 { 00:23:54.163 "name": "BaseBdev2", 00:23:54.163 "uuid": "04abed74-18f2-5b1e-8c8c-9ded42d6d66f", 00:23:54.163 "is_configured": true, 00:23:54.163 "data_offset": 256, 00:23:54.163 "data_size": 7936 00:23:54.163 } 00:23:54.163 ] 00:23:54.163 }' 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88379 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88379 ']' 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88379 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:54.163 04:45:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88379 00:23:54.163 killing process with pid 88379 00:23:54.163 Received shutdown signal, test time was about 60.000000 seconds 00:23:54.163 00:23:54.163 Latency(us) 00:23:54.163 [2024-11-27T04:45:41.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.163 [2024-11-27T04:45:41.786Z] =================================================================================================================== 00:23:54.163 [2024-11-27T04:45:41.786Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88379' 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88379 00:23:54.163 [2024-11-27 04:45:41.692676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:54.163 04:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88379 00:23:54.163 [2024-11-27 04:45:41.692853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:54.163 [2024-11-27 04:45:41.692922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:54.163 [2024-11-27 04:45:41.692942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:54.422 [2024-11-27 04:45:41.983995] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:23:55.796 04:45:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:23:55.796 00:23:55.796 real 0m21.881s 00:23:55.796 user 0m29.498s 00:23:55.796 sys 0m2.579s 00:23:55.796 04:45:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.796 04:45:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:55.796 ************************************ 00:23:55.796 END TEST raid_rebuild_test_sb_md_separate 00:23:55.796 ************************************ 00:23:55.796 04:45:43 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:23:55.796 04:45:43 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:23:55.796 04:45:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:55.796 04:45:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.796 04:45:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:55.796 ************************************ 00:23:55.796 START TEST raid_state_function_test_sb_md_interleaved 00:23:55.796 ************************************ 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:55.796 Process raid pid: 89079 00:23:55.796 
04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89079 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89079' 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89079 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89079 ']' 00:23:55.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.796 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.797 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.797 04:45:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.797 [2024-11-27 04:45:43.224871] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:23:55.797 [2024-11-27 04:45:43.225069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.797 [2024-11-27 04:45:43.409878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.054 [2024-11-27 04:45:43.545183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.312 [2024-11-27 04:45:43.757019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.312 [2024-11-27 04:45:43.757283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.879 [2024-11-27 04:45:44.248496] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:56.879 [2024-11-27 04:45:44.248561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:56.879 [2024-11-27 04:45:44.248579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:56.879 [2024-11-27 04:45:44.248596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:56.879 04:45:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:56.879 04:45:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.879 "name": "Existed_Raid", 00:23:56.879 "uuid": "c75501a5-3f6c-4e61-bb15-123b23bfaafb", 00:23:56.879 "strip_size_kb": 0, 00:23:56.879 "state": "configuring", 00:23:56.879 "raid_level": "raid1", 00:23:56.879 "superblock": true, 00:23:56.879 "num_base_bdevs": 2, 00:23:56.879 "num_base_bdevs_discovered": 0, 00:23:56.879 "num_base_bdevs_operational": 2, 00:23:56.879 "base_bdevs_list": [ 00:23:56.879 { 00:23:56.879 "name": "BaseBdev1", 00:23:56.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.879 "is_configured": false, 00:23:56.879 "data_offset": 0, 00:23:56.879 "data_size": 0 00:23:56.879 }, 00:23:56.879 { 00:23:56.879 "name": "BaseBdev2", 00:23:56.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.879 "is_configured": false, 00:23:56.879 "data_offset": 0, 00:23:56.879 "data_size": 0 00:23:56.879 } 00:23:56.879 ] 00:23:56.879 }' 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.879 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.447 [2024-11-27 04:45:44.788573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:57.447 [2024-11-27 04:45:44.788614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.447 [2024-11-27 04:45:44.796574] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:57.447 [2024-11-27 04:45:44.796629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:57.447 [2024-11-27 04:45:44.796645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:57.447 [2024-11-27 04:45:44.796665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.447 [2024-11-27 04:45:44.847538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:57.447 BaseBdev1 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.447 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.447 [ 00:23:57.447 { 00:23:57.447 "name": "BaseBdev1", 00:23:57.447 "aliases": [ 00:23:57.447 "f288c786-6261-4ae7-a5a5-842bbc3df0b6" 00:23:57.447 ], 00:23:57.447 "product_name": "Malloc disk", 00:23:57.447 "block_size": 4128, 00:23:57.447 "num_blocks": 8192, 00:23:57.447 "uuid": "f288c786-6261-4ae7-a5a5-842bbc3df0b6", 00:23:57.447 "md_size": 32, 00:23:57.447 
"md_interleave": true, 00:23:57.447 "dif_type": 0, 00:23:57.447 "assigned_rate_limits": { 00:23:57.447 "rw_ios_per_sec": 0, 00:23:57.447 "rw_mbytes_per_sec": 0, 00:23:57.447 "r_mbytes_per_sec": 0, 00:23:57.447 "w_mbytes_per_sec": 0 00:23:57.447 }, 00:23:57.447 "claimed": true, 00:23:57.447 "claim_type": "exclusive_write", 00:23:57.447 "zoned": false, 00:23:57.447 "supported_io_types": { 00:23:57.447 "read": true, 00:23:57.447 "write": true, 00:23:57.447 "unmap": true, 00:23:57.447 "flush": true, 00:23:57.447 "reset": true, 00:23:57.447 "nvme_admin": false, 00:23:57.447 "nvme_io": false, 00:23:57.447 "nvme_io_md": false, 00:23:57.447 "write_zeroes": true, 00:23:57.447 "zcopy": true, 00:23:57.447 "get_zone_info": false, 00:23:57.447 "zone_management": false, 00:23:57.447 "zone_append": false, 00:23:57.447 "compare": false, 00:23:57.447 "compare_and_write": false, 00:23:57.447 "abort": true, 00:23:57.447 "seek_hole": false, 00:23:57.447 "seek_data": false, 00:23:57.447 "copy": true, 00:23:57.447 "nvme_iov_md": false 00:23:57.447 }, 00:23:57.448 "memory_domains": [ 00:23:57.448 { 00:23:57.448 "dma_device_id": "system", 00:23:57.448 "dma_device_type": 1 00:23:57.448 }, 00:23:57.448 { 00:23:57.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.448 "dma_device_type": 2 00:23:57.448 } 00:23:57.448 ], 00:23:57.448 "driver_specific": {} 00:23:57.448 } 00:23:57.448 ] 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:57.448 04:45:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.448 "name": "Existed_Raid", 00:23:57.448 "uuid": "d14d4ab0-e537-4a9c-8a4a-fd5df556c716", 00:23:57.448 "strip_size_kb": 0, 00:23:57.448 "state": "configuring", 00:23:57.448 "raid_level": "raid1", 
00:23:57.448 "superblock": true, 00:23:57.448 "num_base_bdevs": 2, 00:23:57.448 "num_base_bdevs_discovered": 1, 00:23:57.448 "num_base_bdevs_operational": 2, 00:23:57.448 "base_bdevs_list": [ 00:23:57.448 { 00:23:57.448 "name": "BaseBdev1", 00:23:57.448 "uuid": "f288c786-6261-4ae7-a5a5-842bbc3df0b6", 00:23:57.448 "is_configured": true, 00:23:57.448 "data_offset": 256, 00:23:57.448 "data_size": 7936 00:23:57.448 }, 00:23:57.448 { 00:23:57.448 "name": "BaseBdev2", 00:23:57.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.448 "is_configured": false, 00:23:57.448 "data_offset": 0, 00:23:57.448 "data_size": 0 00:23:57.448 } 00:23:57.448 ] 00:23:57.448 }' 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.448 04:45:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.014 [2024-11-27 04:45:45.407841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:58.014 [2024-11-27 04:45:45.408058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.014 [2024-11-27 04:45:45.415946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:58.014 [2024-11-27 04:45:45.418676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:58.014 [2024-11-27 04:45:45.418899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.014 
04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.014 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.014 "name": "Existed_Raid", 00:23:58.014 "uuid": "dd08d013-f7f1-4b70-9974-d2f87c4af9c8", 00:23:58.014 "strip_size_kb": 0, 00:23:58.014 "state": "configuring", 00:23:58.014 "raid_level": "raid1", 00:23:58.014 "superblock": true, 00:23:58.015 "num_base_bdevs": 2, 00:23:58.015 "num_base_bdevs_discovered": 1, 00:23:58.015 "num_base_bdevs_operational": 2, 00:23:58.015 "base_bdevs_list": [ 00:23:58.015 { 00:23:58.015 "name": "BaseBdev1", 00:23:58.015 "uuid": "f288c786-6261-4ae7-a5a5-842bbc3df0b6", 00:23:58.015 "is_configured": true, 00:23:58.015 "data_offset": 256, 00:23:58.015 "data_size": 7936 00:23:58.015 }, 00:23:58.015 { 00:23:58.015 "name": "BaseBdev2", 00:23:58.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.015 "is_configured": false, 00:23:58.015 "data_offset": 0, 00:23:58.015 "data_size": 0 00:23:58.015 } 00:23:58.015 ] 00:23:58.015 }' 00:23:58.015 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:23:58.015 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.582 [2024-11-27 04:45:45.983025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:58.582 [2024-11-27 04:45:45.983513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:58.582 BaseBdev2 00:23:58.582 [2024-11-27 04:45:45.983687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:58.582 [2024-11-27 04:45:45.983862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:58.582 [2024-11-27 04:45:45.983974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:58.582 [2024-11-27 04:45:45.983995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:58.582 [2024-11-27 04:45:45.984083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.582 04:45:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.582 [ 00:23:58.582 { 00:23:58.582 "name": "BaseBdev2", 00:23:58.582 "aliases": [ 00:23:58.582 "d368a766-1df1-4d71-872c-c0b351472f9e" 00:23:58.582 ], 00:23:58.582 "product_name": "Malloc disk", 00:23:58.582 "block_size": 4128, 00:23:58.582 "num_blocks": 8192, 00:23:58.582 "uuid": "d368a766-1df1-4d71-872c-c0b351472f9e", 00:23:58.582 "md_size": 32, 00:23:58.582 "md_interleave": true, 00:23:58.582 "dif_type": 0, 00:23:58.582 "assigned_rate_limits": { 00:23:58.582 "rw_ios_per_sec": 0, 00:23:58.582 "rw_mbytes_per_sec": 0, 00:23:58.582 "r_mbytes_per_sec": 0, 00:23:58.582 "w_mbytes_per_sec": 0 00:23:58.582 }, 00:23:58.582 "claimed": true, 00:23:58.582 "claim_type": "exclusive_write", 
00:23:58.582 "zoned": false, 00:23:58.582 "supported_io_types": { 00:23:58.582 "read": true, 00:23:58.582 "write": true, 00:23:58.582 "unmap": true, 00:23:58.582 "flush": true, 00:23:58.582 "reset": true, 00:23:58.582 "nvme_admin": false, 00:23:58.582 "nvme_io": false, 00:23:58.582 "nvme_io_md": false, 00:23:58.582 "write_zeroes": true, 00:23:58.582 "zcopy": true, 00:23:58.582 "get_zone_info": false, 00:23:58.582 "zone_management": false, 00:23:58.582 "zone_append": false, 00:23:58.582 "compare": false, 00:23:58.582 "compare_and_write": false, 00:23:58.582 "abort": true, 00:23:58.582 "seek_hole": false, 00:23:58.582 "seek_data": false, 00:23:58.582 "copy": true, 00:23:58.582 "nvme_iov_md": false 00:23:58.582 }, 00:23:58.582 "memory_domains": [ 00:23:58.582 { 00:23:58.582 "dma_device_id": "system", 00:23:58.582 "dma_device_type": 1 00:23:58.582 }, 00:23:58.582 { 00:23:58.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.582 "dma_device_type": 2 00:23:58.582 } 00:23:58.582 ], 00:23:58.582 "driver_specific": {} 00:23:58.582 } 00:23:58.582 ] 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.582 
04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.582 "name": "Existed_Raid", 00:23:58.582 "uuid": "dd08d013-f7f1-4b70-9974-d2f87c4af9c8", 00:23:58.582 "strip_size_kb": 0, 00:23:58.582 "state": "online", 00:23:58.582 "raid_level": "raid1", 00:23:58.582 "superblock": true, 00:23:58.582 "num_base_bdevs": 2, 00:23:58.582 "num_base_bdevs_discovered": 2, 00:23:58.582 
"num_base_bdevs_operational": 2, 00:23:58.582 "base_bdevs_list": [ 00:23:58.582 { 00:23:58.582 "name": "BaseBdev1", 00:23:58.582 "uuid": "f288c786-6261-4ae7-a5a5-842bbc3df0b6", 00:23:58.582 "is_configured": true, 00:23:58.582 "data_offset": 256, 00:23:58.582 "data_size": 7936 00:23:58.582 }, 00:23:58.582 { 00:23:58.582 "name": "BaseBdev2", 00:23:58.582 "uuid": "d368a766-1df1-4d71-872c-c0b351472f9e", 00:23:58.582 "is_configured": true, 00:23:58.582 "data_offset": 256, 00:23:58.582 "data_size": 7936 00:23:58.582 } 00:23:58.582 ] 00:23:58.582 }' 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.582 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.149 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:59.149 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:59.149 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:59.149 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:59.149 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:59.149 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.150 04:45:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.150 [2024-11-27 04:45:46.595675] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:59.150 "name": "Existed_Raid", 00:23:59.150 "aliases": [ 00:23:59.150 "dd08d013-f7f1-4b70-9974-d2f87c4af9c8" 00:23:59.150 ], 00:23:59.150 "product_name": "Raid Volume", 00:23:59.150 "block_size": 4128, 00:23:59.150 "num_blocks": 7936, 00:23:59.150 "uuid": "dd08d013-f7f1-4b70-9974-d2f87c4af9c8", 00:23:59.150 "md_size": 32, 00:23:59.150 "md_interleave": true, 00:23:59.150 "dif_type": 0, 00:23:59.150 "assigned_rate_limits": { 00:23:59.150 "rw_ios_per_sec": 0, 00:23:59.150 "rw_mbytes_per_sec": 0, 00:23:59.150 "r_mbytes_per_sec": 0, 00:23:59.150 "w_mbytes_per_sec": 0 00:23:59.150 }, 00:23:59.150 "claimed": false, 00:23:59.150 "zoned": false, 00:23:59.150 "supported_io_types": { 00:23:59.150 "read": true, 00:23:59.150 "write": true, 00:23:59.150 "unmap": false, 00:23:59.150 "flush": false, 00:23:59.150 "reset": true, 00:23:59.150 "nvme_admin": false, 00:23:59.150 "nvme_io": false, 00:23:59.150 "nvme_io_md": false, 00:23:59.150 "write_zeroes": true, 00:23:59.150 "zcopy": false, 00:23:59.150 "get_zone_info": false, 00:23:59.150 "zone_management": false, 00:23:59.150 "zone_append": false, 00:23:59.150 "compare": false, 00:23:59.150 "compare_and_write": false, 00:23:59.150 "abort": false, 00:23:59.150 "seek_hole": false, 00:23:59.150 "seek_data": false, 00:23:59.150 "copy": false, 00:23:59.150 "nvme_iov_md": false 00:23:59.150 }, 00:23:59.150 "memory_domains": [ 00:23:59.150 { 00:23:59.150 "dma_device_id": "system", 00:23:59.150 "dma_device_type": 1 00:23:59.150 }, 00:23:59.150 { 00:23:59.150 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:59.150 "dma_device_type": 2 00:23:59.150 }, 00:23:59.150 { 00:23:59.150 "dma_device_id": "system", 00:23:59.150 "dma_device_type": 1 00:23:59.150 }, 00:23:59.150 { 00:23:59.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.150 "dma_device_type": 2 00:23:59.150 } 00:23:59.150 ], 00:23:59.150 "driver_specific": { 00:23:59.150 "raid": { 00:23:59.150 "uuid": "dd08d013-f7f1-4b70-9974-d2f87c4af9c8", 00:23:59.150 "strip_size_kb": 0, 00:23:59.150 "state": "online", 00:23:59.150 "raid_level": "raid1", 00:23:59.150 "superblock": true, 00:23:59.150 "num_base_bdevs": 2, 00:23:59.150 "num_base_bdevs_discovered": 2, 00:23:59.150 "num_base_bdevs_operational": 2, 00:23:59.150 "base_bdevs_list": [ 00:23:59.150 { 00:23:59.150 "name": "BaseBdev1", 00:23:59.150 "uuid": "f288c786-6261-4ae7-a5a5-842bbc3df0b6", 00:23:59.150 "is_configured": true, 00:23:59.150 "data_offset": 256, 00:23:59.150 "data_size": 7936 00:23:59.150 }, 00:23:59.150 { 00:23:59.150 "name": "BaseBdev2", 00:23:59.150 "uuid": "d368a766-1df1-4d71-872c-c0b351472f9e", 00:23:59.150 "is_configured": true, 00:23:59.150 "data_offset": 256, 00:23:59.150 "data_size": 7936 00:23:59.150 } 00:23:59.150 ] 00:23:59.150 } 00:23:59.150 } 00:23:59.150 }' 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:59.150 BaseBdev2' 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.150 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:59.409 
04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.409 [2024-11-27 04:45:46.879438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:59.409 04:45:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.409 04:45:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.409 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.409 "name": "Existed_Raid", 00:23:59.409 "uuid": "dd08d013-f7f1-4b70-9974-d2f87c4af9c8", 00:23:59.409 "strip_size_kb": 0, 00:23:59.409 "state": "online", 00:23:59.409 "raid_level": "raid1", 00:23:59.409 "superblock": true, 00:23:59.409 "num_base_bdevs": 2, 00:23:59.409 "num_base_bdevs_discovered": 1, 00:23:59.409 "num_base_bdevs_operational": 1, 00:23:59.409 "base_bdevs_list": [ 00:23:59.409 { 00:23:59.409 "name": null, 00:23:59.409 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:59.409 "is_configured": false, 00:23:59.409 "data_offset": 0, 00:23:59.409 "data_size": 7936 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "name": "BaseBdev2", 00:23:59.409 "uuid": "d368a766-1df1-4d71-872c-c0b351472f9e", 00:23:59.409 "is_configured": true, 00:23:59.409 "data_offset": 256, 00:23:59.409 "data_size": 7936 00:23:59.409 } 00:23:59.409 ] 00:23:59.409 }' 00:23:59.409 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.668 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.926 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:59.926 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:59.926 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:59.926 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.926 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.926 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.926 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:00.185 04:45:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:00.185 [2024-11-27 04:45:47.563996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:00.185 [2024-11-27 04:45:47.564160] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:00.185 [2024-11-27 04:45:47.652641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:00.185 [2024-11-27 04:45:47.652725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:00.185 [2024-11-27 04:45:47.652761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89079 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89079 ']' 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89079 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89079 00:24:00.185 killing process with pid 89079 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89079' 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89079 00:24:00.185 [2024-11-27 04:45:47.746624] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:00.185 04:45:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89079 00:24:00.185 [2024-11-27 04:45:47.762928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:01.559 
04:45:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:24:01.559 00:24:01.559 real 0m5.762s 00:24:01.559 user 0m8.671s 00:24:01.559 sys 0m0.878s 00:24:01.559 04:45:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.559 04:45:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.559 ************************************ 00:24:01.559 END TEST raid_state_function_test_sb_md_interleaved 00:24:01.559 ************************************ 00:24:01.559 04:45:48 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:24:01.559 04:45:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:01.559 04:45:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.559 04:45:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:01.559 ************************************ 00:24:01.559 START TEST raid_superblock_test_md_interleaved 00:24:01.559 ************************************ 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89337 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89337 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89337 ']' 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.559 04:45:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.559 [2024-11-27 04:45:49.023829] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:24:01.559 [2024-11-27 04:45:49.024568] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89337 ] 00:24:01.816 [2024-11-27 04:45:49.200000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.816 [2024-11-27 04:45:49.332400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.074 [2024-11-27 04:45:49.545301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:02.075 [2024-11-27 04:45:49.545356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.643 malloc1 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.643 [2024-11-27 04:45:50.068883] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:02.643 [2024-11-27 04:45:50.068979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.643 [2024-11-27 04:45:50.069012] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:02.643 [2024-11-27 04:45:50.069027] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.643 
[2024-11-27 04:45:50.071777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.643 [2024-11-27 04:45:50.072067] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:02.643 pt1 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.643 malloc2 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.643 [2024-11-27 04:45:50.129920] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:02.643 [2024-11-27 04:45:50.130011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.643 [2024-11-27 04:45:50.130043] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:02.643 [2024-11-27 04:45:50.130058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.643 [2024-11-27 04:45:50.132660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.643 [2024-11-27 04:45:50.132719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:02.643 pt2 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.643 [2024-11-27 04:45:50.141943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:02.643 [2024-11-27 04:45:50.144555] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:02.643 [2024-11-27 04:45:50.144864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:02.643 [2024-11-27 04:45:50.144885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:02.643 [2024-11-27 04:45:50.144987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:02.643 [2024-11-27 04:45:50.145089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:02.643 [2024-11-27 04:45:50.145108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:02.643 [2024-11-27 04:45:50.145209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:02.643 
04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.643 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:02.643 "name": "raid_bdev1", 00:24:02.643 "uuid": "422811af-9e37-4b6b-a28d-f823fca25ccd", 00:24:02.643 "strip_size_kb": 0, 00:24:02.643 "state": "online", 00:24:02.643 "raid_level": "raid1", 00:24:02.643 "superblock": true, 00:24:02.643 "num_base_bdevs": 2, 00:24:02.643 "num_base_bdevs_discovered": 2, 00:24:02.643 "num_base_bdevs_operational": 2, 00:24:02.643 "base_bdevs_list": [ 00:24:02.643 { 00:24:02.643 "name": "pt1", 00:24:02.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:02.643 "is_configured": true, 00:24:02.643 "data_offset": 256, 00:24:02.643 "data_size": 7936 00:24:02.643 }, 00:24:02.643 { 00:24:02.643 "name": "pt2", 00:24:02.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:02.644 "is_configured": true, 00:24:02.644 "data_offset": 256, 00:24:02.644 "data_size": 7936 00:24:02.644 } 00:24:02.644 ] 00:24:02.644 }' 00:24:02.644 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:02.644 04:45:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:03.211 [2024-11-27 04:45:50.670530] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.211 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:03.211 "name": "raid_bdev1", 00:24:03.211 "aliases": [ 00:24:03.211 "422811af-9e37-4b6b-a28d-f823fca25ccd" 00:24:03.211 ], 00:24:03.211 "product_name": "Raid Volume", 00:24:03.211 "block_size": 4128, 00:24:03.211 "num_blocks": 7936, 00:24:03.212 "uuid": "422811af-9e37-4b6b-a28d-f823fca25ccd", 00:24:03.212 "md_size": 32, 
00:24:03.212 "md_interleave": true, 00:24:03.212 "dif_type": 0, 00:24:03.212 "assigned_rate_limits": { 00:24:03.212 "rw_ios_per_sec": 0, 00:24:03.212 "rw_mbytes_per_sec": 0, 00:24:03.212 "r_mbytes_per_sec": 0, 00:24:03.212 "w_mbytes_per_sec": 0 00:24:03.212 }, 00:24:03.212 "claimed": false, 00:24:03.212 "zoned": false, 00:24:03.212 "supported_io_types": { 00:24:03.212 "read": true, 00:24:03.212 "write": true, 00:24:03.212 "unmap": false, 00:24:03.212 "flush": false, 00:24:03.212 "reset": true, 00:24:03.212 "nvme_admin": false, 00:24:03.212 "nvme_io": false, 00:24:03.212 "nvme_io_md": false, 00:24:03.212 "write_zeroes": true, 00:24:03.212 "zcopy": false, 00:24:03.212 "get_zone_info": false, 00:24:03.212 "zone_management": false, 00:24:03.212 "zone_append": false, 00:24:03.212 "compare": false, 00:24:03.212 "compare_and_write": false, 00:24:03.212 "abort": false, 00:24:03.212 "seek_hole": false, 00:24:03.212 "seek_data": false, 00:24:03.212 "copy": false, 00:24:03.212 "nvme_iov_md": false 00:24:03.212 }, 00:24:03.212 "memory_domains": [ 00:24:03.212 { 00:24:03.212 "dma_device_id": "system", 00:24:03.212 "dma_device_type": 1 00:24:03.212 }, 00:24:03.212 { 00:24:03.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.212 "dma_device_type": 2 00:24:03.212 }, 00:24:03.212 { 00:24:03.212 "dma_device_id": "system", 00:24:03.212 "dma_device_type": 1 00:24:03.212 }, 00:24:03.212 { 00:24:03.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.212 "dma_device_type": 2 00:24:03.212 } 00:24:03.212 ], 00:24:03.212 "driver_specific": { 00:24:03.212 "raid": { 00:24:03.212 "uuid": "422811af-9e37-4b6b-a28d-f823fca25ccd", 00:24:03.212 "strip_size_kb": 0, 00:24:03.212 "state": "online", 00:24:03.212 "raid_level": "raid1", 00:24:03.212 "superblock": true, 00:24:03.212 "num_base_bdevs": 2, 00:24:03.212 "num_base_bdevs_discovered": 2, 00:24:03.212 "num_base_bdevs_operational": 2, 00:24:03.212 "base_bdevs_list": [ 00:24:03.212 { 00:24:03.212 "name": "pt1", 00:24:03.212 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:24:03.212 "is_configured": true, 00:24:03.212 "data_offset": 256, 00:24:03.212 "data_size": 7936 00:24:03.212 }, 00:24:03.212 { 00:24:03.212 "name": "pt2", 00:24:03.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:03.212 "is_configured": true, 00:24:03.212 "data_offset": 256, 00:24:03.212 "data_size": 7936 00:24:03.212 } 00:24:03.212 ] 00:24:03.212 } 00:24:03.212 } 00:24:03.212 }' 00:24:03.212 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:03.212 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:03.212 pt2' 00:24:03.212 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:03.212 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:24:03.212 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:03.212 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:03.212 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:03.212 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.212 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.471 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.471 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:24:03.471 04:45:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:24:03.471 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:03.471 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:03.471 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.471 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.471 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:03.471 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.471 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:24:03.471 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.472 [2024-11-27 04:45:50.934479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=422811af-9e37-4b6b-a28d-f823fca25ccd 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 422811af-9e37-4b6b-a28d-f823fca25ccd ']' 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.472 [2024-11-27 04:45:50.986096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:03.472 [2024-11-27 04:45:50.986142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:03.472 [2024-11-27 04:45:50.986264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:03.472 [2024-11-27 04:45:50.986383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:03.472 [2024-11-27 04:45:50.986407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.472 04:45:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.472 04:45:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.472 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.472 04:45:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.731 [2024-11-27 04:45:51.126185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:03.731 [2024-11-27 04:45:51.128882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:03.731 [2024-11-27 04:45:51.129105] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:24:03.731 [2024-11-27 04:45:51.129316] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:03.731 [2024-11-27 04:45:51.129450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:03.731 [2024-11-27 04:45:51.129475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:03.731 request: 00:24:03.731 { 00:24:03.731 "name": "raid_bdev1", 00:24:03.731 "raid_level": "raid1", 00:24:03.731 "base_bdevs": [ 00:24:03.731 "malloc1", 00:24:03.731 "malloc2" 00:24:03.731 ], 00:24:03.731 "superblock": false, 00:24:03.731 "method": "bdev_raid_create", 00:24:03.731 "req_id": 1 00:24:03.731 } 00:24:03.731 Got JSON-RPC error response 00:24:03.731 response: 00:24:03.731 { 00:24:03.731 "code": -17, 00:24:03.731 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:03.731 } 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.731 04:45:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.731 [2024-11-27 04:45:51.194307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:03.731 [2024-11-27 04:45:51.194504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.731 [2024-11-27 04:45:51.194573] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:03.731 [2024-11-27 04:45:51.194792] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.731 [2024-11-27 04:45:51.197447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.731 [2024-11-27 04:45:51.197619] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:03.731 [2024-11-27 04:45:51.197819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:03.731 [2024-11-27 04:45:51.198016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:03.731 pt1 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.731 04:45:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:03.731 
"name": "raid_bdev1", 00:24:03.731 "uuid": "422811af-9e37-4b6b-a28d-f823fca25ccd", 00:24:03.731 "strip_size_kb": 0, 00:24:03.731 "state": "configuring", 00:24:03.731 "raid_level": "raid1", 00:24:03.731 "superblock": true, 00:24:03.731 "num_base_bdevs": 2, 00:24:03.731 "num_base_bdevs_discovered": 1, 00:24:03.731 "num_base_bdevs_operational": 2, 00:24:03.731 "base_bdevs_list": [ 00:24:03.731 { 00:24:03.731 "name": "pt1", 00:24:03.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:03.731 "is_configured": true, 00:24:03.731 "data_offset": 256, 00:24:03.731 "data_size": 7936 00:24:03.731 }, 00:24:03.731 { 00:24:03.731 "name": null, 00:24:03.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:03.731 "is_configured": false, 00:24:03.731 "data_offset": 256, 00:24:03.731 "data_size": 7936 00:24:03.731 } 00:24:03.731 ] 00:24:03.731 }' 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:03.731 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.299 [2024-11-27 04:45:51.738541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:04.299 [2024-11-27 04:45:51.738670] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.299 [2024-11-27 04:45:51.738715] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:04.299 [2024-11-27 04:45:51.738734] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.299 [2024-11-27 04:45:51.738994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.299 [2024-11-27 04:45:51.739027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:04.299 [2024-11-27 04:45:51.739099] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:04.299 [2024-11-27 04:45:51.739151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:04.299 [2024-11-27 04:45:51.739287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:04.299 [2024-11-27 04:45:51.739323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:04.299 [2024-11-27 04:45:51.739425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:04.299 [2024-11-27 04:45:51.739516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:04.299 [2024-11-27 04:45:51.739529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:04.299 [2024-11-27 04:45:51.739631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.299 pt2 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:04.299 04:45:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:04.299 "name": 
"raid_bdev1", 00:24:04.299 "uuid": "422811af-9e37-4b6b-a28d-f823fca25ccd", 00:24:04.299 "strip_size_kb": 0, 00:24:04.299 "state": "online", 00:24:04.299 "raid_level": "raid1", 00:24:04.299 "superblock": true, 00:24:04.299 "num_base_bdevs": 2, 00:24:04.299 "num_base_bdevs_discovered": 2, 00:24:04.299 "num_base_bdevs_operational": 2, 00:24:04.299 "base_bdevs_list": [ 00:24:04.299 { 00:24:04.299 "name": "pt1", 00:24:04.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:04.299 "is_configured": true, 00:24:04.299 "data_offset": 256, 00:24:04.299 "data_size": 7936 00:24:04.299 }, 00:24:04.299 { 00:24:04.299 "name": "pt2", 00:24:04.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:04.299 "is_configured": true, 00:24:04.299 "data_offset": 256, 00:24:04.299 "data_size": 7936 00:24:04.299 } 00:24:04.299 ] 00:24:04.299 }' 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:04.299 04:45:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.913 [2024-11-27 04:45:52.299073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:04.913 "name": "raid_bdev1", 00:24:04.913 "aliases": [ 00:24:04.913 "422811af-9e37-4b6b-a28d-f823fca25ccd" 00:24:04.913 ], 00:24:04.913 "product_name": "Raid Volume", 00:24:04.913 "block_size": 4128, 00:24:04.913 "num_blocks": 7936, 00:24:04.913 "uuid": "422811af-9e37-4b6b-a28d-f823fca25ccd", 00:24:04.913 "md_size": 32, 00:24:04.913 "md_interleave": true, 00:24:04.913 "dif_type": 0, 00:24:04.913 "assigned_rate_limits": { 00:24:04.913 "rw_ios_per_sec": 0, 00:24:04.913 "rw_mbytes_per_sec": 0, 00:24:04.913 "r_mbytes_per_sec": 0, 00:24:04.913 "w_mbytes_per_sec": 0 00:24:04.913 }, 00:24:04.913 "claimed": false, 00:24:04.913 "zoned": false, 00:24:04.913 "supported_io_types": { 00:24:04.913 "read": true, 00:24:04.913 "write": true, 00:24:04.913 "unmap": false, 00:24:04.913 "flush": false, 00:24:04.913 "reset": true, 00:24:04.913 "nvme_admin": false, 00:24:04.913 "nvme_io": false, 00:24:04.913 "nvme_io_md": false, 00:24:04.913 "write_zeroes": true, 00:24:04.913 "zcopy": false, 00:24:04.913 "get_zone_info": false, 00:24:04.913 "zone_management": false, 00:24:04.913 "zone_append": false, 00:24:04.913 "compare": false, 00:24:04.913 "compare_and_write": false, 00:24:04.913 "abort": false, 00:24:04.913 "seek_hole": false, 00:24:04.913 "seek_data": false, 00:24:04.913 "copy": false, 00:24:04.913 "nvme_iov_md": false 00:24:04.913 }, 
00:24:04.913 "memory_domains": [ 00:24:04.913 { 00:24:04.913 "dma_device_id": "system", 00:24:04.913 "dma_device_type": 1 00:24:04.913 }, 00:24:04.913 { 00:24:04.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.913 "dma_device_type": 2 00:24:04.913 }, 00:24:04.913 { 00:24:04.913 "dma_device_id": "system", 00:24:04.913 "dma_device_type": 1 00:24:04.913 }, 00:24:04.913 { 00:24:04.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.913 "dma_device_type": 2 00:24:04.913 } 00:24:04.913 ], 00:24:04.913 "driver_specific": { 00:24:04.913 "raid": { 00:24:04.913 "uuid": "422811af-9e37-4b6b-a28d-f823fca25ccd", 00:24:04.913 "strip_size_kb": 0, 00:24:04.913 "state": "online", 00:24:04.913 "raid_level": "raid1", 00:24:04.913 "superblock": true, 00:24:04.913 "num_base_bdevs": 2, 00:24:04.913 "num_base_bdevs_discovered": 2, 00:24:04.913 "num_base_bdevs_operational": 2, 00:24:04.913 "base_bdevs_list": [ 00:24:04.913 { 00:24:04.913 "name": "pt1", 00:24:04.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:04.913 "is_configured": true, 00:24:04.913 "data_offset": 256, 00:24:04.913 "data_size": 7936 00:24:04.913 }, 00:24:04.913 { 00:24:04.913 "name": "pt2", 00:24:04.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:04.913 "is_configured": true, 00:24:04.913 "data_offset": 256, 00:24:04.913 "data_size": 7936 00:24:04.913 } 00:24:04.913 ] 00:24:04.913 } 00:24:04.913 } 00:24:04.913 }' 00:24:04.913 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:04.914 pt2' 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:04.914 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.173 [2024-11-27 04:45:52.579165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 422811af-9e37-4b6b-a28d-f823fca25ccd '!=' 422811af-9e37-4b6b-a28d-f823fca25ccd ']' 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.173 [2024-11-27 04:45:52.634896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:24:05.173 "name": "raid_bdev1", 00:24:05.173 "uuid": "422811af-9e37-4b6b-a28d-f823fca25ccd", 00:24:05.173 "strip_size_kb": 0, 00:24:05.173 "state": "online", 00:24:05.173 "raid_level": "raid1", 00:24:05.173 "superblock": true, 00:24:05.173 "num_base_bdevs": 2, 00:24:05.173 "num_base_bdevs_discovered": 1, 00:24:05.173 "num_base_bdevs_operational": 1, 00:24:05.173 "base_bdevs_list": [ 00:24:05.173 { 00:24:05.173 "name": null, 00:24:05.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.173 "is_configured": false, 00:24:05.173 "data_offset": 0, 00:24:05.173 "data_size": 7936 00:24:05.173 }, 00:24:05.173 { 00:24:05.173 "name": "pt2", 00:24:05.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:05.173 "is_configured": true, 00:24:05.173 "data_offset": 256, 00:24:05.173 "data_size": 7936 00:24:05.173 } 00:24:05.173 ] 00:24:05.173 }' 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.173 04:45:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.741 [2024-11-27 04:45:53.191164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:05.741 [2024-11-27 04:45:53.191719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:05.741 [2024-11-27 04:45:53.191892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:05.741 [2024-11-27 04:45:53.192007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:05.741 [2024-11-27 
04:45:53.192028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.741 [2024-11-27 04:45:53.271094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:05.741 [2024-11-27 04:45:53.271314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.741 [2024-11-27 04:45:53.271348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:05.741 [2024-11-27 04:45:53.271365] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.741 [2024-11-27 04:45:53.274432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.741 [2024-11-27 04:45:53.274606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:05.741 [2024-11-27 04:45:53.274690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:05.741 [2024-11-27 04:45:53.274758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:05.741 [2024-11-27 04:45:53.274889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:05.741 [2024-11-27 04:45:53.274953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:24:05.741 [2024-11-27 04:45:53.275062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:05.741 [2024-11-27 04:45:53.275180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:05.741 [2024-11-27 04:45:53.275193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:24:05.741 [2024-11-27 04:45:53.275328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.741 pt2 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.741 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.742 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.742 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.742 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.742 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.742 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.742 "name": "raid_bdev1", 00:24:05.742 "uuid": "422811af-9e37-4b6b-a28d-f823fca25ccd", 00:24:05.742 "strip_size_kb": 0, 00:24:05.742 "state": "online", 00:24:05.742 "raid_level": "raid1", 00:24:05.742 "superblock": true, 00:24:05.742 "num_base_bdevs": 2, 00:24:05.742 "num_base_bdevs_discovered": 1, 00:24:05.742 "num_base_bdevs_operational": 1, 00:24:05.742 "base_bdevs_list": [ 00:24:05.742 { 00:24:05.742 "name": null, 00:24:05.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.742 "is_configured": false, 00:24:05.742 "data_offset": 256, 00:24:05.742 "data_size": 7936 00:24:05.742 }, 00:24:05.742 { 00:24:05.742 "name": "pt2", 00:24:05.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:05.742 "is_configured": true, 00:24:05.742 "data_offset": 256, 00:24:05.742 "data_size": 7936 00:24:05.742 } 00:24:05.742 ] 00:24:05.742 }' 00:24:05.742 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.742 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.309 [2024-11-27 04:45:53.823562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:06.309 [2024-11-27 04:45:53.823625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:06.309 [2024-11-27 04:45:53.823741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:06.309 [2024-11-27 04:45:53.823863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:06.309 [2024-11-27 04:45:53.823882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.309 [2024-11-27 04:45:53.891636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:06.309 [2024-11-27 04:45:53.891896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:06.309 [2024-11-27 04:45:53.891953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:06.309 [2024-11-27 04:45:53.891977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:06.309 [2024-11-27 04:45:53.894809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:06.309 [2024-11-27 04:45:53.894877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:06.309 [2024-11-27 04:45:53.894993] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:06.309 [2024-11-27 04:45:53.895061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:06.309 [2024-11-27 04:45:53.895218] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:06.309 [2024-11-27 04:45:53.895251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:06.309 [2024-11-27 04:45:53.895278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:06.309 [2024-11-27 04:45:53.895359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:06.309 [2024-11-27 04:45:53.895467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:06.309 [2024-11-27 04:45:53.895482] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:06.309 [2024-11-27 04:45:53.895569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:06.309 [2024-11-27 04:45:53.895661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:06.309 [2024-11-27 04:45:53.895679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:06.309 [2024-11-27 04:45:53.895844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.309 pt1 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.309 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.568 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:06.568 "name": "raid_bdev1", 00:24:06.568 "uuid": "422811af-9e37-4b6b-a28d-f823fca25ccd", 00:24:06.568 "strip_size_kb": 0, 00:24:06.568 "state": "online", 00:24:06.568 "raid_level": "raid1", 00:24:06.568 "superblock": true, 00:24:06.568 "num_base_bdevs": 2, 00:24:06.568 "num_base_bdevs_discovered": 1, 00:24:06.568 "num_base_bdevs_operational": 1, 00:24:06.568 "base_bdevs_list": [ 00:24:06.568 { 00:24:06.568 "name": null, 00:24:06.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.568 "is_configured": false, 00:24:06.568 "data_offset": 256, 00:24:06.568 "data_size": 7936 00:24:06.568 }, 00:24:06.568 { 00:24:06.568 "name": "pt2", 00:24:06.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:06.568 "is_configured": true, 00:24:06.568 "data_offset": 256, 00:24:06.568 "data_size": 7936 00:24:06.568 } 00:24:06.568 ] 00:24:06.568 }' 00:24:06.568 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:06.568 04:45:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.828 04:45:54 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:06.828 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:06.828 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.828 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.828 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.087 [2024-11-27 04:45:54.484495] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 422811af-9e37-4b6b-a28d-f823fca25ccd '!=' 422811af-9e37-4b6b-a28d-f823fca25ccd ']' 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89337 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89337 ']' 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89337 00:24:07.087 04:45:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89337 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.087 killing process with pid 89337 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89337' 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89337 00:24:07.087 [2024-11-27 04:45:54.573738] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:07.087 04:45:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89337 00:24:07.087 [2024-11-27 04:45:54.573877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:07.087 [2024-11-27 04:45:54.573950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:07.087 [2024-11-27 04:45:54.573986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:07.346 [2024-11-27 04:45:54.758852] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:08.284 04:45:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:24:08.284 00:24:08.284 real 0m6.886s 00:24:08.284 user 0m10.963s 00:24:08.284 sys 0m0.972s 00:24:08.284 04:45:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:24:08.284 04:45:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:08.284 ************************************ 00:24:08.285 END TEST raid_superblock_test_md_interleaved 00:24:08.285 ************************************ 00:24:08.285 04:45:55 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:24:08.285 04:45:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:08.285 04:45:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.285 04:45:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:08.285 ************************************ 00:24:08.285 START TEST raid_rebuild_test_sb_md_interleaved 00:24:08.285 ************************************ 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89671 00:24:08.285 04:45:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89671 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89671 ']' 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:08.285 04:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:08.544 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:08.544 Zero copy mechanism will not be used. 00:24:08.544 [2024-11-27 04:45:55.968452] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:24:08.544 [2024-11-27 04:45:55.968619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89671 ] 00:24:08.544 [2024-11-27 04:45:56.143114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.803 [2024-11-27 04:45:56.280750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.062 [2024-11-27 04:45:56.486980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:09.062 [2024-11-27 04:45:56.487296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:09.630 04:45:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.630 04:45:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:24:09.630 04:45:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:09.630 04:45:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:24:09.630 04:45:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.630 04:45:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.630 BaseBdev1_malloc 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.630 04:45:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.630 [2024-11-27 04:45:57.015367] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:09.630 [2024-11-27 04:45:57.015443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.630 [2024-11-27 04:45:57.015477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:09.630 [2024-11-27 04:45:57.015498] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.630 [2024-11-27 04:45:57.018178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.630 [2024-11-27 04:45:57.018371] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:09.630 BaseBdev1 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.630 BaseBdev2_malloc 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:24:09.630 [2024-11-27 04:45:57.072520] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:09.630 [2024-11-27 04:45:57.072599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.630 [2024-11-27 04:45:57.072639] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:09.630 [2024-11-27 04:45:57.072658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.630 [2024-11-27 04:45:57.075410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.630 [2024-11-27 04:45:57.075592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:09.630 BaseBdev2 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.630 spare_malloc 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.630 spare_delay 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.630 [2024-11-27 04:45:57.143879] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:09.630 [2024-11-27 04:45:57.144153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.630 [2024-11-27 04:45:57.144195] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:09.630 [2024-11-27 04:45:57.144216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.630 [2024-11-27 04:45:57.147034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.630 [2024-11-27 04:45:57.147098] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:09.630 spare 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.630 [2024-11-27 04:45:57.151958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:09.630 [2024-11-27 04:45:57.154575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:09.630 [2024-11-27 
04:45:57.154908] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:09.630 [2024-11-27 04:45:57.154933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:09.630 [2024-11-27 04:45:57.155034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:09.630 [2024-11-27 04:45:57.155144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:09.630 [2024-11-27 04:45:57.155159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:09.630 [2024-11-27 04:45:57.155244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.630 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:09.630 "name": "raid_bdev1", 00:24:09.630 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:09.630 "strip_size_kb": 0, 00:24:09.631 "state": "online", 00:24:09.631 "raid_level": "raid1", 00:24:09.631 "superblock": true, 00:24:09.631 "num_base_bdevs": 2, 00:24:09.631 "num_base_bdevs_discovered": 2, 00:24:09.631 "num_base_bdevs_operational": 2, 00:24:09.631 "base_bdevs_list": [ 00:24:09.631 { 00:24:09.631 "name": "BaseBdev1", 00:24:09.631 "uuid": "643f6d61-d945-5e99-8b54-7ed8fb752e34", 00:24:09.631 "is_configured": true, 00:24:09.631 "data_offset": 256, 00:24:09.631 "data_size": 7936 00:24:09.631 }, 00:24:09.631 { 00:24:09.631 "name": "BaseBdev2", 00:24:09.631 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:09.631 "is_configured": true, 00:24:09.631 "data_offset": 256, 00:24:09.631 "data_size": 7936 00:24:09.631 } 00:24:09.631 ] 00:24:09.631 }' 00:24:09.631 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:09.631 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.198 04:45:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:10.198 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.198 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:10.198 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.198 [2024-11-27 04:45:57.676512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:10.199 04:45:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.199 [2024-11-27 04:45:57.772162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.199 04:45:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.199 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.458 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:10.458 "name": "raid_bdev1", 00:24:10.458 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:10.458 "strip_size_kb": 0, 00:24:10.458 "state": "online", 00:24:10.458 "raid_level": "raid1", 00:24:10.458 "superblock": true, 00:24:10.458 "num_base_bdevs": 2, 00:24:10.458 "num_base_bdevs_discovered": 1, 00:24:10.458 "num_base_bdevs_operational": 1, 00:24:10.458 "base_bdevs_list": [ 00:24:10.458 { 00:24:10.458 "name": null, 00:24:10.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.458 "is_configured": false, 00:24:10.458 "data_offset": 0, 00:24:10.458 "data_size": 7936 00:24:10.458 }, 00:24:10.458 { 00:24:10.458 "name": "BaseBdev2", 00:24:10.458 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:10.458 "is_configured": true, 00:24:10.458 "data_offset": 256, 00:24:10.458 "data_size": 7936 00:24:10.458 } 00:24:10.458 ] 00:24:10.458 }' 00:24:10.458 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:10.458 04:45:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.716 04:45:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:10.716 04:45:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.716 04:45:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.716 [2024-11-27 04:45:58.248304] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:10.716 [2024-11-27 04:45:58.265455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:10.717 04:45:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.717 04:45:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:10.717 [2024-11-27 04:45:58.268170] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:11.654 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.655 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.655 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:11.655 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:11.655 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.914 "name": "raid_bdev1", 00:24:11.914 
"uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:11.914 "strip_size_kb": 0, 00:24:11.914 "state": "online", 00:24:11.914 "raid_level": "raid1", 00:24:11.914 "superblock": true, 00:24:11.914 "num_base_bdevs": 2, 00:24:11.914 "num_base_bdevs_discovered": 2, 00:24:11.914 "num_base_bdevs_operational": 2, 00:24:11.914 "process": { 00:24:11.914 "type": "rebuild", 00:24:11.914 "target": "spare", 00:24:11.914 "progress": { 00:24:11.914 "blocks": 2560, 00:24:11.914 "percent": 32 00:24:11.914 } 00:24:11.914 }, 00:24:11.914 "base_bdevs_list": [ 00:24:11.914 { 00:24:11.914 "name": "spare", 00:24:11.914 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:11.914 "is_configured": true, 00:24:11.914 "data_offset": 256, 00:24:11.914 "data_size": 7936 00:24:11.914 }, 00:24:11.914 { 00:24:11.914 "name": "BaseBdev2", 00:24:11.914 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:11.914 "is_configured": true, 00:24:11.914 "data_offset": 256, 00:24:11.914 "data_size": 7936 00:24:11.914 } 00:24:11.914 ] 00:24:11.914 }' 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:11.914 [2024-11-27 04:45:59.437841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:24:11.914 [2024-11-27 04:45:59.477283] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:11.914 [2024-11-27 04:45:59.477754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.914 [2024-11-27 04:45:59.478031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:11.914 [2024-11-27 04:45:59.478176] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.914 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.174 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.174 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.174 "name": "raid_bdev1", 00:24:12.174 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:12.174 "strip_size_kb": 0, 00:24:12.174 "state": "online", 00:24:12.174 "raid_level": "raid1", 00:24:12.174 "superblock": true, 00:24:12.174 "num_base_bdevs": 2, 00:24:12.174 "num_base_bdevs_discovered": 1, 00:24:12.174 "num_base_bdevs_operational": 1, 00:24:12.174 "base_bdevs_list": [ 00:24:12.174 { 00:24:12.174 "name": null, 00:24:12.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.174 "is_configured": false, 00:24:12.174 "data_offset": 0, 00:24:12.174 "data_size": 7936 00:24:12.174 }, 00:24:12.174 { 00:24:12.174 "name": "BaseBdev2", 00:24:12.174 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:12.174 "is_configured": true, 00:24:12.174 "data_offset": 256, 00:24:12.174 "data_size": 7936 00:24:12.174 } 00:24:12.174 ] 00:24:12.174 }' 00:24:12.174 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.174 04:45:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.433 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:12.433 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:24:12.433 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:12.433 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:12.433 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:12.433 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.433 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.433 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.433 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.433 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.692 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:12.692 "name": "raid_bdev1", 00:24:12.692 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:12.692 "strip_size_kb": 0, 00:24:12.692 "state": "online", 00:24:12.692 "raid_level": "raid1", 00:24:12.692 "superblock": true, 00:24:12.692 "num_base_bdevs": 2, 00:24:12.692 "num_base_bdevs_discovered": 1, 00:24:12.692 "num_base_bdevs_operational": 1, 00:24:12.692 "base_bdevs_list": [ 00:24:12.692 { 00:24:12.692 "name": null, 00:24:12.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.692 "is_configured": false, 00:24:12.692 "data_offset": 0, 00:24:12.692 "data_size": 7936 00:24:12.692 }, 00:24:12.692 { 00:24:12.692 "name": "BaseBdev2", 00:24:12.692 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:12.692 "is_configured": true, 00:24:12.692 "data_offset": 256, 00:24:12.692 "data_size": 7936 00:24:12.692 } 00:24:12.692 ] 00:24:12.692 }' 
00:24:12.692 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:12.692 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:12.692 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:12.692 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:12.692 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:12.692 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.692 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.692 [2024-11-27 04:46:00.182493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:12.692 [2024-11-27 04:46:00.199459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:12.692 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.692 04:46:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:12.692 [2024-11-27 04:46:00.202241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:13.627 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.627 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:13.627 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:13.627 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:24:13.627 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:13.627 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.627 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.627 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.627 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:13.627 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.885 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:13.885 "name": "raid_bdev1", 00:24:13.885 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:13.885 "strip_size_kb": 0, 00:24:13.885 "state": "online", 00:24:13.885 "raid_level": "raid1", 00:24:13.885 "superblock": true, 00:24:13.885 "num_base_bdevs": 2, 00:24:13.885 "num_base_bdevs_discovered": 2, 00:24:13.885 "num_base_bdevs_operational": 2, 00:24:13.885 "process": { 00:24:13.885 "type": "rebuild", 00:24:13.885 "target": "spare", 00:24:13.885 "progress": { 00:24:13.885 "blocks": 2560, 00:24:13.885 "percent": 32 00:24:13.885 } 00:24:13.885 }, 00:24:13.885 "base_bdevs_list": [ 00:24:13.885 { 00:24:13.885 "name": "spare", 00:24:13.885 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:13.885 "is_configured": true, 00:24:13.885 "data_offset": 256, 00:24:13.886 "data_size": 7936 00:24:13.886 }, 00:24:13.886 { 00:24:13.886 "name": "BaseBdev2", 00:24:13.886 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:13.886 "is_configured": true, 00:24:13.886 "data_offset": 256, 00:24:13.886 "data_size": 7936 00:24:13.886 } 00:24:13.886 ] 00:24:13.886 }' 00:24:13.886 04:46:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:13.886 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=802 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:13.886 04:46:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:13.886 "name": "raid_bdev1", 00:24:13.886 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:13.886 "strip_size_kb": 0, 00:24:13.886 "state": "online", 00:24:13.886 "raid_level": "raid1", 00:24:13.886 "superblock": true, 00:24:13.886 "num_base_bdevs": 2, 00:24:13.886 "num_base_bdevs_discovered": 2, 00:24:13.886 "num_base_bdevs_operational": 2, 00:24:13.886 "process": { 00:24:13.886 "type": "rebuild", 00:24:13.886 "target": "spare", 00:24:13.886 "progress": { 00:24:13.886 "blocks": 2816, 00:24:13.886 "percent": 35 00:24:13.886 } 00:24:13.886 }, 00:24:13.886 "base_bdevs_list": [ 00:24:13.886 { 00:24:13.886 "name": "spare", 00:24:13.886 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:13.886 "is_configured": true, 00:24:13.886 "data_offset": 256, 00:24:13.886 "data_size": 7936 00:24:13.886 }, 00:24:13.886 { 00:24:13.886 "name": "BaseBdev2", 00:24:13.886 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:13.886 "is_configured": true, 00:24:13.886 "data_offset": 256, 00:24:13.886 "data_size": 7936 00:24:13.886 } 00:24:13.886 ] 00:24:13.886 }' 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.886 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:14.143 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.143 04:46:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.076 04:46:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:15.076 "name": "raid_bdev1", 00:24:15.076 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:15.076 "strip_size_kb": 0, 00:24:15.076 "state": "online", 00:24:15.076 "raid_level": "raid1", 00:24:15.076 "superblock": true, 00:24:15.076 "num_base_bdevs": 2, 00:24:15.076 "num_base_bdevs_discovered": 2, 00:24:15.076 "num_base_bdevs_operational": 2, 00:24:15.076 "process": { 00:24:15.076 "type": "rebuild", 00:24:15.076 "target": "spare", 00:24:15.076 "progress": { 00:24:15.076 "blocks": 5888, 00:24:15.076 "percent": 74 00:24:15.076 } 00:24:15.076 }, 00:24:15.076 "base_bdevs_list": [ 00:24:15.076 { 00:24:15.076 "name": "spare", 00:24:15.076 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:15.076 "is_configured": true, 00:24:15.076 "data_offset": 256, 00:24:15.076 "data_size": 7936 00:24:15.076 }, 00:24:15.076 { 00:24:15.076 "name": "BaseBdev2", 00:24:15.076 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:15.076 "is_configured": true, 00:24:15.076 "data_offset": 256, 00:24:15.076 "data_size": 7936 00:24:15.076 } 00:24:15.076 ] 00:24:15.076 }' 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.076 04:46:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:16.009 [2024-11-27 04:46:03.327232] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:16.009 [2024-11-27 04:46:03.327342] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:16.009 [2024-11-27 04:46:03.327534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:16.268 "name": "raid_bdev1", 00:24:16.268 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:16.268 "strip_size_kb": 0, 00:24:16.268 "state": "online", 00:24:16.268 "raid_level": "raid1", 00:24:16.268 "superblock": true, 00:24:16.268 "num_base_bdevs": 2, 00:24:16.268 
"num_base_bdevs_discovered": 2, 00:24:16.268 "num_base_bdevs_operational": 2, 00:24:16.268 "base_bdevs_list": [ 00:24:16.268 { 00:24:16.268 "name": "spare", 00:24:16.268 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:16.268 "is_configured": true, 00:24:16.268 "data_offset": 256, 00:24:16.268 "data_size": 7936 00:24:16.268 }, 00:24:16.268 { 00:24:16.268 "name": "BaseBdev2", 00:24:16.268 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:16.268 "is_configured": true, 00:24:16.268 "data_offset": 256, 00:24:16.268 "data_size": 7936 00:24:16.268 } 00:24:16.268 ] 00:24:16.268 }' 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.268 04:46:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:16.268 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.526 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:16.526 "name": "raid_bdev1", 00:24:16.526 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:16.526 "strip_size_kb": 0, 00:24:16.526 "state": "online", 00:24:16.526 "raid_level": "raid1", 00:24:16.526 "superblock": true, 00:24:16.526 "num_base_bdevs": 2, 00:24:16.526 "num_base_bdevs_discovered": 2, 00:24:16.526 "num_base_bdevs_operational": 2, 00:24:16.526 "base_bdevs_list": [ 00:24:16.526 { 00:24:16.526 "name": "spare", 00:24:16.526 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:16.526 "is_configured": true, 00:24:16.526 "data_offset": 256, 00:24:16.526 "data_size": 7936 00:24:16.526 }, 00:24:16.526 { 00:24:16.526 "name": "BaseBdev2", 00:24:16.526 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:16.526 "is_configured": true, 00:24:16.526 "data_offset": 256, 00:24:16.526 "data_size": 7936 00:24:16.526 } 00:24:16.526 ] 00:24:16.526 }' 00:24:16.526 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:16.526 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:16.526 04:46:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:16.526 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:16.526 04:46:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:16.527 "name": 
"raid_bdev1", 00:24:16.527 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:16.527 "strip_size_kb": 0, 00:24:16.527 "state": "online", 00:24:16.527 "raid_level": "raid1", 00:24:16.527 "superblock": true, 00:24:16.527 "num_base_bdevs": 2, 00:24:16.527 "num_base_bdevs_discovered": 2, 00:24:16.527 "num_base_bdevs_operational": 2, 00:24:16.527 "base_bdevs_list": [ 00:24:16.527 { 00:24:16.527 "name": "spare", 00:24:16.527 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:16.527 "is_configured": true, 00:24:16.527 "data_offset": 256, 00:24:16.527 "data_size": 7936 00:24:16.527 }, 00:24:16.527 { 00:24:16.527 "name": "BaseBdev2", 00:24:16.527 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:16.527 "is_configured": true, 00:24:16.527 "data_offset": 256, 00:24:16.527 "data_size": 7936 00:24:16.527 } 00:24:16.527 ] 00:24:16.527 }' 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:16.527 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.096 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:17.096 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.096 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.096 [2024-11-27 04:46:04.532075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:17.096 [2024-11-27 04:46:04.532297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:17.096 [2024-11-27 04:46:04.532541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:17.096 [2024-11-27 04:46:04.532805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:17.097 [2024-11-27 
04:46:04.532834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.097 04:46:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.097 [2024-11-27 04:46:04.604052] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:17.097 [2024-11-27 04:46:04.604289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:17.097 [2024-11-27 04:46:04.604332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:17.097 [2024-11-27 04:46:04.604349] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:17.097 [2024-11-27 04:46:04.607017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:17.097 [2024-11-27 04:46:04.607062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:17.097 [2024-11-27 04:46:04.607156] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:17.097 [2024-11-27 04:46:04.607237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.097 [2024-11-27 04:46:04.607395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:17.097 spare 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.097 [2024-11-27 04:46:04.707519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:17.097 [2024-11-27 04:46:04.707738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:17.097 [2024-11-27 04:46:04.707921] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:17.097 [2024-11-27 04:46:04.708060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:17.097 [2024-11-27 04:46:04.708079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:17.097 [2024-11-27 04:46:04.708235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:17.097 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.365 04:46:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:17.365 "name": "raid_bdev1", 00:24:17.365 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:17.365 "strip_size_kb": 0, 00:24:17.365 "state": "online", 00:24:17.365 "raid_level": "raid1", 00:24:17.365 "superblock": true, 00:24:17.365 "num_base_bdevs": 2, 00:24:17.365 "num_base_bdevs_discovered": 2, 00:24:17.365 "num_base_bdevs_operational": 2, 00:24:17.365 "base_bdevs_list": [ 00:24:17.365 { 00:24:17.365 "name": "spare", 00:24:17.365 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:17.365 "is_configured": true, 00:24:17.365 "data_offset": 256, 00:24:17.365 "data_size": 7936 00:24:17.365 }, 00:24:17.365 { 00:24:17.365 "name": "BaseBdev2", 00:24:17.365 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:17.365 "is_configured": true, 00:24:17.365 "data_offset": 256, 00:24:17.365 "data_size": 7936 00:24:17.365 } 00:24:17.365 ] 00:24:17.365 }' 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:17.365 04:46:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.623 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:17.623 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:17.623 04:46:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:17.623 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:17.623 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:17.623 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.623 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.623 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.623 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:17.882 "name": "raid_bdev1", 00:24:17.882 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:17.882 "strip_size_kb": 0, 00:24:17.882 "state": "online", 00:24:17.882 "raid_level": "raid1", 00:24:17.882 "superblock": true, 00:24:17.882 "num_base_bdevs": 2, 00:24:17.882 "num_base_bdevs_discovered": 2, 00:24:17.882 "num_base_bdevs_operational": 2, 00:24:17.882 "base_bdevs_list": [ 00:24:17.882 { 00:24:17.882 "name": "spare", 00:24:17.882 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:17.882 "is_configured": true, 00:24:17.882 "data_offset": 256, 00:24:17.882 "data_size": 7936 00:24:17.882 }, 00:24:17.882 { 00:24:17.882 "name": "BaseBdev2", 00:24:17.882 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:17.882 "is_configured": true, 00:24:17.882 "data_offset": 256, 00:24:17.882 "data_size": 7936 00:24:17.882 } 00:24:17.882 ] 00:24:17.882 }' 00:24:17.882 04:46:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.882 [2024-11-27 04:46:05.444535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:17.882 04:46:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:17.882 "name": "raid_bdev1", 00:24:17.882 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:17.882 "strip_size_kb": 0, 00:24:17.882 "state": "online", 00:24:17.882 
"raid_level": "raid1", 00:24:17.882 "superblock": true, 00:24:17.882 "num_base_bdevs": 2, 00:24:17.882 "num_base_bdevs_discovered": 1, 00:24:17.882 "num_base_bdevs_operational": 1, 00:24:17.882 "base_bdevs_list": [ 00:24:17.882 { 00:24:17.882 "name": null, 00:24:17.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.882 "is_configured": false, 00:24:17.882 "data_offset": 0, 00:24:17.882 "data_size": 7936 00:24:17.882 }, 00:24:17.882 { 00:24:17.882 "name": "BaseBdev2", 00:24:17.882 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:17.882 "is_configured": true, 00:24:17.882 "data_offset": 256, 00:24:17.882 "data_size": 7936 00:24:17.882 } 00:24:17.882 ] 00:24:17.882 }' 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:17.882 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:18.447 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:18.447 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.447 04:46:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:18.447 [2024-11-27 04:46:05.996710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:18.447 [2024-11-27 04:46:05.996998] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:18.447 [2024-11-27 04:46:05.997027] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:18.447 [2024-11-27 04:46:05.997076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:18.447 [2024-11-27 04:46:06.013182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:18.447 04:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.447 04:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:18.447 [2024-11-27 04:46:06.015856] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:24:19.823 "name": "raid_bdev1", 00:24:19.823 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:19.823 "strip_size_kb": 0, 00:24:19.823 "state": "online", 00:24:19.823 "raid_level": "raid1", 00:24:19.823 "superblock": true, 00:24:19.823 "num_base_bdevs": 2, 00:24:19.823 "num_base_bdevs_discovered": 2, 00:24:19.823 "num_base_bdevs_operational": 2, 00:24:19.823 "process": { 00:24:19.823 "type": "rebuild", 00:24:19.823 "target": "spare", 00:24:19.823 "progress": { 00:24:19.823 "blocks": 2560, 00:24:19.823 "percent": 32 00:24:19.823 } 00:24:19.823 }, 00:24:19.823 "base_bdevs_list": [ 00:24:19.823 { 00:24:19.823 "name": "spare", 00:24:19.823 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:19.823 "is_configured": true, 00:24:19.823 "data_offset": 256, 00:24:19.823 "data_size": 7936 00:24:19.823 }, 00:24:19.823 { 00:24:19.823 "name": "BaseBdev2", 00:24:19.823 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:19.823 "is_configured": true, 00:24:19.823 "data_offset": 256, 00:24:19.823 "data_size": 7936 00:24:19.823 } 00:24:19.823 ] 00:24:19.823 }' 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:19.823 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:19.824 [2024-11-27 04:46:07.205269] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:19.824 [2024-11-27 04:46:07.225183] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:19.824 [2024-11-27 04:46:07.225270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.824 [2024-11-27 04:46:07.225295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:19.824 [2024-11-27 04:46:07.225310] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.824 04:46:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.824 "name": "raid_bdev1", 00:24:19.824 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:19.824 "strip_size_kb": 0, 00:24:19.824 "state": "online", 00:24:19.824 "raid_level": "raid1", 00:24:19.824 "superblock": true, 00:24:19.824 "num_base_bdevs": 2, 00:24:19.824 "num_base_bdevs_discovered": 1, 00:24:19.824 "num_base_bdevs_operational": 1, 00:24:19.824 "base_bdevs_list": [ 00:24:19.824 { 00:24:19.824 "name": null, 00:24:19.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.824 "is_configured": false, 00:24:19.824 "data_offset": 0, 00:24:19.824 "data_size": 7936 00:24:19.824 }, 00:24:19.824 { 00:24:19.824 "name": "BaseBdev2", 00:24:19.824 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:19.824 "is_configured": true, 00:24:19.824 "data_offset": 256, 00:24:19.824 "data_size": 7936 00:24:19.824 } 00:24:19.824 ] 00:24:19.824 }' 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.824 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:20.390 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:20.390 04:46:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.390 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:20.390 [2024-11-27 04:46:07.765565] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:20.390 [2024-11-27 04:46:07.765661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.390 [2024-11-27 04:46:07.765704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:20.390 [2024-11-27 04:46:07.765724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.390 [2024-11-27 04:46:07.766022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.390 [2024-11-27 04:46:07.766055] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:20.390 [2024-11-27 04:46:07.766130] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:20.390 [2024-11-27 04:46:07.766153] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:20.390 [2024-11-27 04:46:07.766171] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:20.390 [2024-11-27 04:46:07.766203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:20.390 [2024-11-27 04:46:07.781987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:20.390 spare 00:24:20.390 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.390 04:46:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:20.390 [2024-11-27 04:46:07.784581] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:21.326 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:21.326 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:21.326 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:21.326 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:21.326 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:21.326 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.326 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.326 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.327 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:21.327 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.327 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:24:21.327 "name": "raid_bdev1", 00:24:21.327 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:21.327 "strip_size_kb": 0, 00:24:21.327 "state": "online", 00:24:21.327 "raid_level": "raid1", 00:24:21.327 "superblock": true, 00:24:21.327 "num_base_bdevs": 2, 00:24:21.327 "num_base_bdevs_discovered": 2, 00:24:21.327 "num_base_bdevs_operational": 2, 00:24:21.327 "process": { 00:24:21.327 "type": "rebuild", 00:24:21.327 "target": "spare", 00:24:21.327 "progress": { 00:24:21.327 "blocks": 2560, 00:24:21.327 "percent": 32 00:24:21.327 } 00:24:21.327 }, 00:24:21.327 "base_bdevs_list": [ 00:24:21.327 { 00:24:21.327 "name": "spare", 00:24:21.327 "uuid": "d4bd19f1-5344-5f95-b621-a028be23c67b", 00:24:21.327 "is_configured": true, 00:24:21.327 "data_offset": 256, 00:24:21.327 "data_size": 7936 00:24:21.327 }, 00:24:21.327 { 00:24:21.327 "name": "BaseBdev2", 00:24:21.327 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:21.327 "is_configured": true, 00:24:21.327 "data_offset": 256, 00:24:21.327 "data_size": 7936 00:24:21.327 } 00:24:21.327 ] 00:24:21.327 }' 00:24:21.327 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:21.327 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:21.327 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:21.586 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:21.586 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:21.586 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.586 04:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:21.586 [2024-11-27 
04:46:08.953989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:21.586 [2024-11-27 04:46:08.993943] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:21.586 [2024-11-27 04:46:08.994038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.586 [2024-11-27 04:46:08.994067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:21.586 [2024-11-27 04:46:08.994078] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.586 04:46:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.586 "name": "raid_bdev1", 00:24:21.586 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:21.586 "strip_size_kb": 0, 00:24:21.586 "state": "online", 00:24:21.586 "raid_level": "raid1", 00:24:21.586 "superblock": true, 00:24:21.586 "num_base_bdevs": 2, 00:24:21.586 "num_base_bdevs_discovered": 1, 00:24:21.586 "num_base_bdevs_operational": 1, 00:24:21.586 "base_bdevs_list": [ 00:24:21.586 { 00:24:21.586 "name": null, 00:24:21.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.586 "is_configured": false, 00:24:21.586 "data_offset": 0, 00:24:21.586 "data_size": 7936 00:24:21.586 }, 00:24:21.586 { 00:24:21.586 "name": "BaseBdev2", 00:24:21.586 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:21.586 "is_configured": true, 00:24:21.586 "data_offset": 256, 00:24:21.586 "data_size": 7936 00:24:21.586 } 00:24:21.586 ] 00:24:21.586 }' 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.586 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:22.151 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:22.151 04:46:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:22.151 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:22.151 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:22.151 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:22.151 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.151 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.151 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.151 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:22.151 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.151 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:22.151 "name": "raid_bdev1", 00:24:22.151 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:22.151 "strip_size_kb": 0, 00:24:22.151 "state": "online", 00:24:22.151 "raid_level": "raid1", 00:24:22.151 "superblock": true, 00:24:22.151 "num_base_bdevs": 2, 00:24:22.151 "num_base_bdevs_discovered": 1, 00:24:22.151 "num_base_bdevs_operational": 1, 00:24:22.151 "base_bdevs_list": [ 00:24:22.151 { 00:24:22.151 "name": null, 00:24:22.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.151 "is_configured": false, 00:24:22.151 "data_offset": 0, 00:24:22.151 "data_size": 7936 00:24:22.151 }, 00:24:22.151 { 00:24:22.151 "name": "BaseBdev2", 00:24:22.151 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:22.151 "is_configured": true, 00:24:22.151 "data_offset": 256, 
00:24:22.151 "data_size": 7936 00:24:22.151 } 00:24:22.151 ] 00:24:22.151 }' 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:22.152 [2024-11-27 04:46:09.706333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:22.152 [2024-11-27 04:46:09.706406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.152 [2024-11-27 04:46:09.706441] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:22.152 [2024-11-27 04:46:09.706457] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.152 [2024-11-27 04:46:09.706701] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.152 [2024-11-27 04:46:09.706737] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:22.152 [2024-11-27 04:46:09.706830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:22.152 [2024-11-27 04:46:09.706853] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:22.152 [2024-11-27 04:46:09.706876] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:22.152 [2024-11-27 04:46:09.706890] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:22.152 BaseBdev1 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.152 04:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:23.526 04:46:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:23.526 "name": "raid_bdev1", 00:24:23.526 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:23.526 "strip_size_kb": 0, 00:24:23.526 "state": "online", 00:24:23.526 "raid_level": "raid1", 00:24:23.526 "superblock": true, 00:24:23.526 "num_base_bdevs": 2, 00:24:23.526 "num_base_bdevs_discovered": 1, 00:24:23.526 "num_base_bdevs_operational": 1, 00:24:23.526 "base_bdevs_list": [ 00:24:23.526 { 00:24:23.526 "name": null, 00:24:23.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.526 "is_configured": false, 00:24:23.526 "data_offset": 0, 00:24:23.526 "data_size": 7936 00:24:23.526 }, 00:24:23.526 { 00:24:23.526 "name": "BaseBdev2", 00:24:23.526 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:23.526 "is_configured": true, 00:24:23.526 "data_offset": 256, 00:24:23.526 "data_size": 7936 00:24:23.526 } 00:24:23.526 ] 00:24:23.526 }' 00:24:23.526 04:46:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:23.526 04:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:23.785 "name": "raid_bdev1", 00:24:23.785 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:23.785 "strip_size_kb": 0, 00:24:23.785 "state": "online", 00:24:23.785 "raid_level": "raid1", 00:24:23.785 "superblock": true, 00:24:23.785 "num_base_bdevs": 2, 00:24:23.785 "num_base_bdevs_discovered": 1, 00:24:23.785 "num_base_bdevs_operational": 1, 00:24:23.785 "base_bdevs_list": [ 00:24:23.785 { 00:24:23.785 "name": 
null, 00:24:23.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.785 "is_configured": false, 00:24:23.785 "data_offset": 0, 00:24:23.785 "data_size": 7936 00:24:23.785 }, 00:24:23.785 { 00:24:23.785 "name": "BaseBdev2", 00:24:23.785 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:23.785 "is_configured": true, 00:24:23.785 "data_offset": 256, 00:24:23.785 "data_size": 7936 00:24:23.785 } 00:24:23.785 ] 00:24:23.785 }' 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:23.785 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:24.044 [2024-11-27 04:46:11.430979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:24.044 [2024-11-27 04:46:11.431194] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:24.044 [2024-11-27 04:46:11.431237] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:24.044 request: 00:24:24.044 { 00:24:24.044 "base_bdev": "BaseBdev1", 00:24:24.044 "raid_bdev": "raid_bdev1", 00:24:24.044 "method": "bdev_raid_add_base_bdev", 00:24:24.044 "req_id": 1 00:24:24.044 } 00:24:24.044 Got JSON-RPC error response 00:24:24.044 response: 00:24:24.044 { 00:24:24.044 "code": -22, 00:24:24.044 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:24.044 } 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:24.044 04:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:24.980 "name": "raid_bdev1", 00:24:24.980 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:24.980 "strip_size_kb": 0, 
00:24:24.980 "state": "online", 00:24:24.980 "raid_level": "raid1", 00:24:24.980 "superblock": true, 00:24:24.980 "num_base_bdevs": 2, 00:24:24.980 "num_base_bdevs_discovered": 1, 00:24:24.980 "num_base_bdevs_operational": 1, 00:24:24.980 "base_bdevs_list": [ 00:24:24.980 { 00:24:24.980 "name": null, 00:24:24.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.980 "is_configured": false, 00:24:24.980 "data_offset": 0, 00:24:24.980 "data_size": 7936 00:24:24.980 }, 00:24:24.980 { 00:24:24.980 "name": "BaseBdev2", 00:24:24.980 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:24.980 "is_configured": true, 00:24:24.980 "data_offset": 256, 00:24:24.980 "data_size": 7936 00:24:24.980 } 00:24:24.980 ] 00:24:24.980 }' 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:24.980 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.547 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:25.547 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:25.547 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:25.547 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:25.548 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:25.548 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.548 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.548 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.548 
04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.548 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.548 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:25.548 "name": "raid_bdev1", 00:24:25.548 "uuid": "d094659d-21ba-4960-adce-0bfb64379ab4", 00:24:25.548 "strip_size_kb": 0, 00:24:25.548 "state": "online", 00:24:25.548 "raid_level": "raid1", 00:24:25.548 "superblock": true, 00:24:25.548 "num_base_bdevs": 2, 00:24:25.548 "num_base_bdevs_discovered": 1, 00:24:25.548 "num_base_bdevs_operational": 1, 00:24:25.548 "base_bdevs_list": [ 00:24:25.548 { 00:24:25.548 "name": null, 00:24:25.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.548 "is_configured": false, 00:24:25.548 "data_offset": 0, 00:24:25.548 "data_size": 7936 00:24:25.548 }, 00:24:25.548 { 00:24:25.548 "name": "BaseBdev2", 00:24:25.548 "uuid": "e696138b-feb1-52a3-b24a-afc915802062", 00:24:25.548 "is_configured": true, 00:24:25.548 "data_offset": 256, 00:24:25.548 "data_size": 7936 00:24:25.548 } 00:24:25.548 ] 00:24:25.548 }' 00:24:25.548 04:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89671 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89671 ']' 00:24:25.548 04:46:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89671 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89671 00:24:25.548 killing process with pid 89671 00:24:25.548 Received shutdown signal, test time was about 60.000000 seconds 00:24:25.548 00:24:25.548 Latency(us) 00:24:25.548 [2024-11-27T04:46:13.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.548 [2024-11-27T04:46:13.171Z] =================================================================================================================== 00:24:25.548 [2024-11-27T04:46:13.171Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89671' 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89671 00:24:25.548 [2024-11-27 04:46:13.132630] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:25.548 04:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89671 00:24:25.548 [2024-11-27 04:46:13.132819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:25.548 [2024-11-27 04:46:13.132901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:24:25.548 [2024-11-27 04:46:13.132924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:25.806 [2024-11-27 04:46:13.395871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:27.181 04:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:24:27.181 00:24:27.181 real 0m18.572s 00:24:27.181 user 0m25.375s 00:24:27.181 sys 0m1.436s 00:24:27.181 ************************************ 00:24:27.181 END TEST raid_rebuild_test_sb_md_interleaved 00:24:27.181 ************************************ 00:24:27.181 04:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.181 04:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:27.181 04:46:14 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:24:27.181 04:46:14 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:24:27.181 04:46:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89671 ']' 00:24:27.181 04:46:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89671 00:24:27.181 04:46:14 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:24:27.181 ************************************ 00:24:27.181 END TEST bdev_raid 00:24:27.181 ************************************ 00:24:27.181 00:24:27.181 real 13m4.527s 00:24:27.181 user 18m26.673s 00:24:27.181 sys 1m44.816s 00:24:27.181 04:46:14 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.181 04:46:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:27.181 04:46:14 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:27.181 04:46:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:27.181 04:46:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.181 04:46:14 -- common/autotest_common.sh@10 -- # set +x 00:24:27.181 
************************************ 00:24:27.181 START TEST spdkcli_raid 00:24:27.181 ************************************ 00:24:27.181 04:46:14 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:27.181 * Looking for test storage... 00:24:27.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:27.181 04:46:14 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:27.181 04:46:14 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:27.181 04:46:14 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:27.181 04:46:14 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:24:27.181 04:46:14 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.182 04:46:14 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:24:27.182 04:46:14 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:24:27.182 04:46:14 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.182 04:46:14 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:24:27.182 04:46:14 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.182 04:46:14 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.182 04:46:14 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.182 04:46:14 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:27.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.182 --rc genhtml_branch_coverage=1 00:24:27.182 --rc genhtml_function_coverage=1 00:24:27.182 --rc genhtml_legend=1 00:24:27.182 --rc geninfo_all_blocks=1 00:24:27.182 --rc geninfo_unexecuted_blocks=1 00:24:27.182 00:24:27.182 ' 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:27.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.182 --rc genhtml_branch_coverage=1 00:24:27.182 --rc genhtml_function_coverage=1 00:24:27.182 --rc genhtml_legend=1 00:24:27.182 --rc geninfo_all_blocks=1 00:24:27.182 --rc geninfo_unexecuted_blocks=1 00:24:27.182 00:24:27.182 ' 00:24:27.182 
04:46:14 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:27.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.182 --rc genhtml_branch_coverage=1 00:24:27.182 --rc genhtml_function_coverage=1 00:24:27.182 --rc genhtml_legend=1 00:24:27.182 --rc geninfo_all_blocks=1 00:24:27.182 --rc geninfo_unexecuted_blocks=1 00:24:27.182 00:24:27.182 ' 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:27.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.182 --rc genhtml_branch_coverage=1 00:24:27.182 --rc genhtml_function_coverage=1 00:24:27.182 --rc genhtml_legend=1 00:24:27.182 --rc geninfo_all_blocks=1 00:24:27.182 --rc geninfo_unexecuted_blocks=1 00:24:27.182 00:24:27.182 ' 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:24:27.182 04:46:14 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:27.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90352 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:24:27.182 04:46:14 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90352 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90352 ']' 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.182 04:46:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:27.440 [2024-11-27 04:46:14.909190] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:24:27.440 [2024-11-27 04:46:14.909377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90352 ] 00:24:27.698 [2024-11-27 04:46:15.096046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:27.698 [2024-11-27 04:46:15.235884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.698 [2024-11-27 04:46:15.235910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.633 04:46:16 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.633 04:46:16 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:24:28.633 04:46:16 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:24:28.633 04:46:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.633 04:46:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:28.633 04:46:16 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:24:28.633 04:46:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.633 04:46:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:28.633 04:46:16 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:28.633 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:28.633 ' 00:24:30.536 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:24:30.536 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:24:30.536 04:46:17 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:24:30.536 04:46:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:30.536 04:46:17 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:24:30.536 04:46:17 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:24:30.536 04:46:17 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.536 04:46:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:30.536 04:46:17 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:24:30.536 ' 00:24:31.471 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:24:31.735 04:46:19 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:24:31.735 04:46:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.735 04:46:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:31.735 04:46:19 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:24:31.735 04:46:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.735 04:46:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:31.735 04:46:19 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:24:31.735 04:46:19 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:24:32.303 04:46:19 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:24:32.303 04:46:19 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:24:32.303 04:46:19 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:24:32.303 04:46:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.303 04:46:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:32.303 04:46:19 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:24:32.303 04:46:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.303 04:46:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:32.303 04:46:19 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:24:32.303 ' 00:24:33.681 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:24:33.681 04:46:20 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:24:33.681 04:46:20 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:33.681 04:46:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:33.681 04:46:21 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:24:33.681 04:46:21 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:33.681 04:46:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:33.681 04:46:21 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:24:33.681 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:24:33.681 ' 00:24:35.058 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:24:35.058 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:24:35.058 04:46:22 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:35.058 04:46:22 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90352 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90352 ']' 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90352 00:24:35.058 04:46:22 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90352 00:24:35.058 killing process with pid 90352 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90352' 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90352 00:24:35.058 04:46:22 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90352 00:24:37.590 Process with pid 90352 is not found 00:24:37.590 04:46:24 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:24:37.590 04:46:24 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90352 ']' 00:24:37.590 04:46:24 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90352 00:24:37.590 04:46:24 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90352 ']' 00:24:37.590 04:46:24 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90352 00:24:37.590 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90352) - No such process 00:24:37.590 04:46:24 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90352 is not found' 00:24:37.590 04:46:24 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:24:37.590 04:46:24 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:37.590 04:46:24 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:37.590 04:46:24 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:37.590 ************************************ 00:24:37.590 END TEST spdkcli_raid 
00:24:37.590 ************************************ 00:24:37.590 00:24:37.590 real 0m10.349s 00:24:37.590 user 0m21.451s 00:24:37.590 sys 0m1.171s 00:24:37.590 04:46:24 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.590 04:46:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:37.590 04:46:24 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:24:37.590 04:46:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.590 04:46:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.590 04:46:24 -- common/autotest_common.sh@10 -- # set +x 00:24:37.590 ************************************ 00:24:37.590 START TEST blockdev_raid5f 00:24:37.590 ************************************ 00:24:37.590 04:46:24 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:24:37.590 * Looking for test storage... 00:24:37.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:37.590 04:46:25 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:37.590 04:46:25 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:24:37.590 04:46:25 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:37.590 04:46:25 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.590 04:46:25 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:24:37.590 04:46:25 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.590 04:46:25 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:37.590 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.590 --rc genhtml_branch_coverage=1 00:24:37.590 --rc genhtml_function_coverage=1 00:24:37.590 --rc genhtml_legend=1 00:24:37.590 --rc geninfo_all_blocks=1 00:24:37.590 --rc geninfo_unexecuted_blocks=1 00:24:37.590 00:24:37.590 ' 00:24:37.590 04:46:25 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:37.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.590 --rc genhtml_branch_coverage=1 00:24:37.590 --rc genhtml_function_coverage=1 00:24:37.590 --rc genhtml_legend=1 00:24:37.590 --rc geninfo_all_blocks=1 00:24:37.590 --rc geninfo_unexecuted_blocks=1 00:24:37.590 00:24:37.590 ' 00:24:37.590 04:46:25 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:37.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.590 --rc genhtml_branch_coverage=1 00:24:37.590 --rc genhtml_function_coverage=1 00:24:37.590 --rc genhtml_legend=1 00:24:37.590 --rc geninfo_all_blocks=1 00:24:37.590 --rc geninfo_unexecuted_blocks=1 00:24:37.590 00:24:37.590 ' 00:24:37.590 04:46:25 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:37.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.590 --rc genhtml_branch_coverage=1 00:24:37.590 --rc genhtml_function_coverage=1 00:24:37.590 --rc genhtml_legend=1 00:24:37.590 --rc geninfo_all_blocks=1 00:24:37.590 --rc geninfo_unexecuted_blocks=1 00:24:37.590 00:24:37.590 ' 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90632 00:24:37.590 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:37.591 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:37.591 04:46:25 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90632 00:24:37.591 04:46:25 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90632 ']' 00:24:37.591 04:46:25 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.591 04:46:25 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.591 04:46:25 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.591 04:46:25 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.591 04:46:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:37.849 [2024-11-27 04:46:25.283202] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:24:37.849 [2024-11-27 04:46:25.283397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90632 ] 00:24:37.849 [2024-11-27 04:46:25.469332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.108 [2024-11-27 04:46:25.606284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:39.189 Malloc0 00:24:39.189 Malloc1 00:24:39.189 Malloc2 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:39.189 04:46:26 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:39.189 04:46:26 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "cc61d28c-909d-4232-ad27-e3763e0b3bee"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cc61d28c-909d-4232-ad27-e3763e0b3bee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "cc61d28c-909d-4232-ad27-e3763e0b3bee",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "3aad7ae0-1268-40a0-b55b-0ad11666c4c7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e6a247f2-42b9-4c53-9f58-98bf30969ec5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c2585965-0c32-48ac-b9d8-b879c4a78e2b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:24:39.189 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:24:39.449 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:24:39.449 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:24:39.449 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:24:39.449 04:46:26 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90632 00:24:39.449 04:46:26 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90632 ']' 00:24:39.449 04:46:26 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90632 00:24:39.449 04:46:26 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:24:39.449 04:46:26 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.449 
04:46:26 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90632 00:24:39.449 killing process with pid 90632 00:24:39.449 04:46:26 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.449 04:46:26 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.449 04:46:26 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90632' 00:24:39.449 04:46:26 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90632 00:24:39.449 04:46:26 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90632 00:24:41.982 04:46:29 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:41.982 04:46:29 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:24:41.982 04:46:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:41.982 04:46:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.982 04:46:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:41.982 ************************************ 00:24:41.983 START TEST bdev_hello_world 00:24:41.983 ************************************ 00:24:41.983 04:46:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:24:41.983 [2024-11-27 04:46:29.472989] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
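The `killprocess` sequence traced above (check the pid is set, `kill -0` to confirm the process exists, inspect its name with `ps --no-headers -o comm=`, refuse to kill `sudo`, then `kill` and `wait`) can be re-sketched as a standalone function. This is an illustrative reconstruction of the pattern, not the actual `autotest_common.sh` helper, and it assumes GNU procps `ps`:

```shell
#!/usr/bin/env bash
killprocess() {
  local pid=$1 pname
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1          # process must still exist
  pname=$(ps --no-headers -o comm= "$pid")        # never kill sudo itself
  [ "$pname" != "sudo" ] || return 1
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                 # reap; ignore the kill status
}

# Demo on a throwaway background job (stand-in for the spdk_tgt pid)
sleep 30 & pid=$!
killprocess "$pid"
```

The `wait` at the end mirrors the trace's `wait 90632`: it reaps the child so the pid cannot be recycled mid-test.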
00:24:41.983 [2024-11-27 04:46:29.473179] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90694 ] 00:24:42.242 [2024-11-27 04:46:29.656226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.242 [2024-11-27 04:46:29.787319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.809 [2024-11-27 04:46:30.332016] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:42.809 [2024-11-27 04:46:30.332075] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:24:42.809 [2024-11-27 04:46:30.332116] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:42.809 [2024-11-27 04:46:30.332737] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:42.809 [2024-11-27 04:46:30.332946] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:42.809 [2024-11-27 04:46:30.332982] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:42.809 [2024-11-27 04:46:30.333059] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:24:42.809 00:24:42.809 [2024-11-27 04:46:30.333089] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:44.186 00:24:44.186 real 0m2.277s 00:24:44.186 user 0m1.840s 00:24:44.186 sys 0m0.311s 00:24:44.187 04:46:31 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.187 ************************************ 00:24:44.187 END TEST bdev_hello_world 00:24:44.187 ************************************ 00:24:44.187 04:46:31 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:24:44.187 04:46:31 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:24:44.187 04:46:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:44.187 04:46:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.187 04:46:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:44.187 ************************************ 00:24:44.187 START TEST bdev_bounds 00:24:44.187 ************************************ 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:24:44.187 Process bdevio pid: 90742 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90742 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90742' 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90742 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90742 ']' 00:24:44.187 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.187 04:46:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:44.187 [2024-11-27 04:46:31.801840] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:24:44.187 [2024-11-27 04:46:31.802291] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90742 ] 00:24:44.446 [2024-11-27 04:46:31.989816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:44.704 [2024-11-27 04:46:32.127083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.704 [2024-11-27 04:46:32.127191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.704 [2024-11-27 04:46:32.127197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.312 04:46:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.312 04:46:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:24:45.312 04:46:32 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:45.571 I/O targets: 00:24:45.571 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:24:45.571 00:24:45.571 00:24:45.571 CUnit 
- A unit testing framework for C - Version 2.1-3 00:24:45.571 http://cunit.sourceforge.net/ 00:24:45.571 00:24:45.571 00:24:45.571 Suite: bdevio tests on: raid5f 00:24:45.571 Test: blockdev write read block ...passed 00:24:45.571 Test: blockdev write zeroes read block ...passed 00:24:45.571 Test: blockdev write zeroes read no split ...passed 00:24:45.571 Test: blockdev write zeroes read split ...passed 00:24:45.571 Test: blockdev write zeroes read split partial ...passed 00:24:45.571 Test: blockdev reset ...passed 00:24:45.571 Test: blockdev write read 8 blocks ...passed 00:24:45.571 Test: blockdev write read size > 128k ...passed 00:24:45.571 Test: blockdev write read invalid size ...passed 00:24:45.571 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:45.571 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:45.571 Test: blockdev write read max offset ...passed 00:24:45.571 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:45.571 Test: blockdev writev readv 8 blocks ...passed 00:24:45.571 Test: blockdev writev readv 30 x 1block ...passed 00:24:45.571 Test: blockdev writev readv block ...passed 00:24:45.571 Test: blockdev writev readv size > 128k ...passed 00:24:45.571 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:45.571 Test: blockdev comparev and writev ...passed 00:24:45.571 Test: blockdev nvme passthru rw ...passed 00:24:45.571 Test: blockdev nvme passthru vendor specific ...passed 00:24:45.571 Test: blockdev nvme admin passthru ...passed 00:24:45.571 Test: blockdev copy ...passed 00:24:45.571 00:24:45.571 Run Summary: Type Total Ran Passed Failed Inactive 00:24:45.571 suites 1 1 n/a 0 0 00:24:45.571 tests 23 23 23 0 0 00:24:45.571 asserts 130 130 130 0 n/a 00:24:45.571 00:24:45.571 Elapsed time = 0.563 seconds 00:24:45.571 0 00:24:45.830 04:46:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90742 00:24:45.830 04:46:33 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90742 ']' 00:24:45.830 04:46:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90742 00:24:45.830 04:46:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:24:45.830 04:46:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.830 04:46:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90742 00:24:45.830 04:46:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:45.830 04:46:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:45.830 04:46:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90742' 00:24:45.830 killing process with pid 90742 00:24:45.830 04:46:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90742 00:24:45.830 04:46:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90742 00:24:47.205 04:46:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:24:47.205 00:24:47.205 real 0m2.862s 00:24:47.205 user 0m7.135s 00:24:47.205 sys 0m0.463s 00:24:47.205 04:46:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:47.205 04:46:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:47.205 ************************************ 00:24:47.205 END TEST bdev_bounds 00:24:47.205 ************************************ 00:24:47.205 04:46:34 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:24:47.205 04:46:34 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:47.205 04:46:34 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.205 04:46:34 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:47.205 ************************************ 00:24:47.205 START TEST bdev_nbd 00:24:47.205 ************************************ 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # 
local bdev_list 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90802 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90802 /var/tmp/spdk-nbd.sock 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90802 ']' 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:47.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.205 04:46:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:47.205 [2024-11-27 04:46:34.729977] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:24:47.206 [2024-11-27 04:46:34.730338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.463 [2024-11-27 04:46:34.913852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.463 [2024-11-27 04:46:35.072623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:48.398 1+0 records in 00:24:48.398 1+0 records out 00:24:48.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291567 s, 14.0 MB/s 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
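The `waitfornbd` steps traced above poll `/proc/partitions` until the kernel exposes the nbd device, then read one 4 KiB block with `O_DIRECT` to confirm it answers I/O. A minimal re-sketch of that pattern (illustrative only, not the actual helper; the retry count and sleep interval are assumptions, and a real run needs an actual `/dev/nbdX` backed by `nbd_start_disk`):

```shell
#!/usr/bin/env bash
waitfornbd() {
  local nbd_name=$1 i
  # Poll until the device shows up in the kernel's partition table
  for (( i = 1; i <= 20; i++ )); do
    if grep -q -w "$nbd_name" /proc/partitions; then
      break
    fi
    sleep 0.1
  done
  grep -q -w "$nbd_name" /proc/partitions || return 1
  # One direct-I/O read proves the nbd server is actually serving requests
  dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null
}

# Intended usage after nbd_start_disk returns /dev/nbd0:
#   waitfornbd nbd0
```

The direct-I/O `dd` matters: a plain buffered read could succeed from the page cache without ever touching the nbd connection.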
00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:48.398 04:46:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:48.965 { 00:24:48.965 "nbd_device": "/dev/nbd0", 00:24:48.965 "bdev_name": "raid5f" 00:24:48.965 } 00:24:48.965 ]' 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:48.965 { 00:24:48.965 "nbd_device": "/dev/nbd0", 00:24:48.965 "bdev_name": "raid5f" 00:24:48.965 } 00:24:48.965 ]' 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:48.965 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.224 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:24:49.483 04:46:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.483 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.483 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:24:49.741 /dev/nbd0 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:49.741 04:46:37 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.741 1+0 records in 00:24:49.741 1+0 records out 00:24:49.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253618 s, 16.2 MB/s 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.741 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:50.327 { 00:24:50.327 "nbd_device": "/dev/nbd0", 00:24:50.327 "bdev_name": "raid5f" 00:24:50.327 } 00:24:50.327 ]' 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:50.327 { 00:24:50.327 "nbd_device": "/dev/nbd0", 00:24:50.327 "bdev_name": "raid5f" 00:24:50.327 } 00:24:50.327 ]' 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:50.327 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:50.327 256+0 records in 00:24:50.327 256+0 records out 00:24:50.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00700796 s, 150 MB/s 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:50.328 256+0 records in 00:24:50.328 256+0 records out 00:24:50.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0410834 s, 25.5 MB/s 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.328 04:46:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:50.586 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:50.844 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:50.845 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:24:50.845 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:51.413 malloc_lvol_verify 00:24:51.413 04:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:51.672 df21f86b-8db2-4e46-9e37-7c23e337e287 00:24:51.672 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:51.929 00e79447-ca12-4c07-8002-fd6d7ffec067 00:24:51.930 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:52.213 /dev/nbd0 00:24:52.213 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:24:52.213 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:24:52.213 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:24:52.213 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:24:52.213 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:24:52.213 mke2fs 1.47.0 (5-Feb-2023) 00:24:52.213 Discarding device blocks: 0/4096 done 00:24:52.213 Creating filesystem with 4096 1k blocks and 1024 inodes 00:24:52.213 00:24:52.213 Allocating group tables: 0/1 done 00:24:52.213 Writing inode tables: 0/1 done 00:24:52.213 Creating journal (1024 blocks): done 00:24:52.213 Writing superblocks and filesystem accounting information: 0/1 done 00:24:52.213 00:24:52.213 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:52.213 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:52.213 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:52.213 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:52.213 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:52.214 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:52.214 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90802 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90802 ']' 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90802 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.479 04:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90802 00:24:52.479 killing process with pid 90802 00:24:52.479 04:46:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:52.480 04:46:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:52.480 04:46:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90802' 00:24:52.480 04:46:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90802 00:24:52.480 04:46:40 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90802 00:24:53.857 04:46:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:24:53.857 00:24:53.857 real 0m6.777s 00:24:53.858 user 0m9.830s 00:24:53.858 sys 0m1.407s 00:24:53.858 04:46:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.858 ************************************ 00:24:53.858 04:46:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:53.858 END TEST bdev_nbd 00:24:53.858 ************************************ 00:24:53.858 04:46:41 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:24:53.858 04:46:41 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:24:53.858 04:46:41 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:24:53.858 04:46:41 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:24:53.858 04:46:41 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.858 04:46:41 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.858 04:46:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:53.858 ************************************ 00:24:53.858 START TEST bdev_fio 00:24:53.858 ************************************ 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:24:53.858 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:24:53.858 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:54.117 ************************************ 00:24:54.117 START TEST bdev_fio_rw_verify 00:24:54.117 ************************************ 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:54.117 04:46:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:54.376 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:54.376 fio-3.35 00:24:54.376 Starting 1 thread 00:25:06.607 00:25:06.607 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91010: Wed Nov 27 04:46:52 2024 00:25:06.607 read: IOPS=8345, BW=32.6MiB/s (34.2MB/s)(326MiB/10001msec) 00:25:06.607 slat (nsec): min=23652, max=97100, avg=30314.21, stdev=6394.53 00:25:06.607 clat (usec): min=12, max=504, avg=191.07, stdev=73.13 00:25:06.607 lat (usec): min=38, max=567, avg=221.38, stdev=74.04 00:25:06.607 clat percentiles (usec): 00:25:06.607 | 50.000th=[ 190], 99.000th=[ 338], 99.900th=[ 388], 99.990th=[ 437], 00:25:06.607 | 99.999th=[ 506] 00:25:06.607 write: IOPS=8742, BW=34.1MiB/s (35.8MB/s)(337MiB/9871msec); 0 zone resets 00:25:06.607 slat (usec): min=11, max=208, avg=23.57, stdev= 6.49 00:25:06.607 clat (usec): min=77, max=1271, avg=438.62, stdev=61.36 00:25:06.607 lat (usec): min=97, max=1480, avg=462.19, stdev=63.05 00:25:06.607 clat percentiles (usec): 00:25:06.607 | 50.000th=[ 441], 99.000th=[ 586], 99.900th=[ 725], 99.990th=[ 979], 00:25:06.607 | 99.999th=[ 1270] 00:25:06.607 bw ( KiB/s): min=33720, max=37528, per=99.18%, avg=34684.63, stdev=1087.79, samples=19 00:25:06.607 iops : min= 8430, max= 9382, avg=8671.16, stdev=271.95, samples=19 00:25:06.607 lat (usec) : 20=0.01%, 100=6.25%, 
250=30.43%, 500=56.14%, 750=7.14% 00:25:06.607 lat (usec) : 1000=0.03% 00:25:06.607 lat (msec) : 2=0.01% 00:25:06.607 cpu : usr=98.74%, sys=0.48%, ctx=19, majf=0, minf=7263 00:25:06.607 IO depths : 1=7.8%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:06.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.607 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.607 issued rwts: total=83463,86296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.607 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:06.607 00:25:06.607 Run status group 0 (all jobs): 00:25:06.607 READ: bw=32.6MiB/s (34.2MB/s), 32.6MiB/s-32.6MiB/s (34.2MB/s-34.2MB/s), io=326MiB (342MB), run=10001-10001msec 00:25:06.607 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=337MiB (353MB), run=9871-9871msec 00:25:06.866 ----------------------------------------------------- 00:25:06.866 Suppressions used: 00:25:06.866 count bytes template 00:25:06.866 1 7 /usr/src/fio/parse.c 00:25:06.866 252 24192 /usr/src/fio/iolog.c 00:25:06.866 1 8 libtcmalloc_minimal.so 00:25:06.866 1 904 libcrypto.so 00:25:06.866 ----------------------------------------------------- 00:25:06.866 00:25:06.866 00:25:06.866 real 0m12.806s 00:25:06.866 user 0m13.294s 00:25:06.866 sys 0m0.738s 00:25:06.866 ************************************ 00:25:06.866 END TEST bdev_fio_rw_verify 00:25:06.866 ************************************ 00:25:06.866 04:46:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.866 04:46:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:25:06.866 04:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:25:06.866 04:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:06.866 04:46:54 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:25:06.866 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:06.866 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:25:06.866 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:25:06.866 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "cc61d28c-909d-4232-ad27-e3763e0b3bee"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cc61d28c-909d-4232-ad27-e3763e0b3bee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "cc61d28c-909d-4232-ad27-e3763e0b3bee",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "3aad7ae0-1268-40a0-b55b-0ad11666c4c7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e6a247f2-42b9-4c53-9f58-98bf30969ec5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c2585965-0c32-48ac-b9d8-b879c4a78e2b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:06.867 /home/vagrant/spdk_repo/spdk 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:25:06.867 00:25:06.867 real 0m13.019s 00:25:06.867 user 0m13.402s 00:25:06.867 sys 0m0.821s 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.867 04:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:25:06.867 ************************************ 00:25:06.867 END TEST bdev_fio 00:25:06.867 ************************************ 00:25:07.126 04:46:54 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:07.126 04:46:54 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:07.126 04:46:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:25:07.126 04:46:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.126 04:46:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:07.126 ************************************ 00:25:07.126 START TEST bdev_verify 00:25:07.126 ************************************ 00:25:07.126 04:46:54 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:07.126 [2024-11-27 04:46:54.612044] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:25:07.126 [2024-11-27 04:46:54.612221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91173 ] 00:25:07.391 [2024-11-27 04:46:54.785609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:07.391 [2024-11-27 04:46:54.919057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.391 [2024-11-27 04:46:54.919060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.988 Running I/O for 5 seconds... 00:25:09.888 11070.00 IOPS, 43.24 MiB/s [2024-11-27T04:46:58.882Z] 10507.00 IOPS, 41.04 MiB/s [2024-11-27T04:46:59.821Z] 10394.67 IOPS, 40.60 MiB/s [2024-11-27T04:47:00.754Z] 11099.25 IOPS, 43.36 MiB/s [2024-11-27T04:47:00.754Z] 11523.60 IOPS, 45.01 MiB/s 00:25:13.131 Latency(us) 00:25:13.131 [2024-11-27T04:47:00.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.131 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:13.131 Verification LBA range: start 0x0 length 0x2000 00:25:13.131 raid5f : 5.02 5767.05 22.53 0.00 0.00 33344.81 264.38 27525.12 00:25:13.131 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:13.131 Verification LBA range: start 0x2000 length 0x2000 00:25:13.131 raid5f : 5.02 5748.26 22.45 0.00 0.00 33406.41 323.96 27525.12 00:25:13.131 [2024-11-27T04:47:00.754Z] =================================================================================================================== 00:25:13.131 [2024-11-27T04:47:00.754Z] Total : 11515.31 44.98 0.00 0.00 33375.55 264.38 27525.12 00:25:14.505 00:25:14.505 real 0m7.283s 00:25:14.505 user 0m13.382s 00:25:14.505 sys 0m0.300s 00:25:14.505 04:47:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.505 
************************************ 00:25:14.505 END TEST bdev_verify 00:25:14.505 04:47:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:25:14.505 ************************************ 00:25:14.505 04:47:01 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:14.505 04:47:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:25:14.505 04:47:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.505 04:47:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:14.505 ************************************ 00:25:14.505 START TEST bdev_verify_big_io 00:25:14.505 ************************************ 00:25:14.505 04:47:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:14.505 [2024-11-27 04:47:01.947656] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:25:14.505 [2024-11-27 04:47:01.947871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91266 ] 00:25:14.764 [2024-11-27 04:47:02.133194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:14.764 [2024-11-27 04:47:02.261842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.764 [2024-11-27 04:47:02.261856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.349 Running I/O for 5 seconds... 
00:25:17.661 506.00 IOPS, 31.62 MiB/s [2024-11-27T04:47:06.219Z] 696.00 IOPS, 43.50 MiB/s [2024-11-27T04:47:07.155Z] 760.00 IOPS, 47.50 MiB/s [2024-11-27T04:47:08.091Z] 760.50 IOPS, 47.53 MiB/s [2024-11-27T04:47:08.091Z] 761.60 IOPS, 47.60 MiB/s 00:25:20.468 Latency(us) 00:25:20.468 [2024-11-27T04:47:08.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.468 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:20.468 Verification LBA range: start 0x0 length 0x200 00:25:20.468 raid5f : 5.23 376.14 23.51 0.00 0.00 8334225.84 197.35 364141.85 00:25:20.468 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:20.468 Verification LBA range: start 0x200 length 0x200 00:25:20.468 raid5f : 5.25 374.52 23.41 0.00 0.00 8387683.76 236.45 362235.35 00:25:20.468 [2024-11-27T04:47:08.091Z] =================================================================================================================== 00:25:20.468 [2024-11-27T04:47:08.091Z] Total : 750.66 46.92 0.00 0.00 8360954.80 197.35 364141.85 00:25:21.844 00:25:21.844 real 0m7.538s 00:25:21.844 user 0m13.871s 00:25:21.844 sys 0m0.314s 00:25:21.844 04:47:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:21.844 ************************************ 00:25:21.844 END TEST bdev_verify_big_io 00:25:21.844 ************************************ 00:25:21.844 04:47:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:25:21.844 04:47:09 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:21.844 04:47:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:21.844 04:47:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:21.844 04:47:09 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:21.844 ************************************ 00:25:21.844 START TEST bdev_write_zeroes 00:25:21.844 ************************************ 00:25:21.844 04:47:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:22.102 [2024-11-27 04:47:09.540229] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:25:22.102 [2024-11-27 04:47:09.540427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91368 ] 00:25:22.360 [2024-11-27 04:47:09.724619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.360 [2024-11-27 04:47:09.853989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.928 Running I/O for 1 seconds... 
00:25:23.864 20175.00 IOPS, 78.81 MiB/s 00:25:23.864 Latency(us) 00:25:23.864 [2024-11-27T04:47:11.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.864 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:23.864 raid5f : 1.01 20143.48 78.69 0.00 0.00 6328.85 1951.19 8877.15 00:25:23.864 [2024-11-27T04:47:11.487Z] =================================================================================================================== 00:25:23.864 [2024-11-27T04:47:11.487Z] Total : 20143.48 78.69 0.00 0.00 6328.85 1951.19 8877.15 00:25:25.241 00:25:25.241 real 0m3.250s 00:25:25.241 user 0m2.819s 00:25:25.241 sys 0m0.296s 00:25:25.241 ************************************ 00:25:25.241 END TEST bdev_write_zeroes 00:25:25.241 ************************************ 00:25:25.241 04:47:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.241 04:47:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:25:25.241 04:47:12 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:25.241 04:47:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:25.241 04:47:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.241 04:47:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:25.241 ************************************ 00:25:25.241 START TEST bdev_json_nonenclosed 00:25:25.241 ************************************ 00:25:25.241 04:47:12 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:25.241 [2024-11-27 
04:47:12.836539] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:25:25.241 [2024-11-27 04:47:12.837243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91420 ] 00:25:25.499 [2024-11-27 04:47:13.021096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.758 [2024-11-27 04:47:13.151131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.758 [2024-11-27 04:47:13.151303] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:25:25.758 [2024-11-27 04:47:13.151344] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:25.758 [2024-11-27 04:47:13.151359] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:26.016 00:25:26.016 real 0m0.681s 00:25:26.016 user 0m0.425s 00:25:26.016 sys 0m0.150s 00:25:26.016 04:47:13 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.016 04:47:13 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:25:26.016 ************************************ 00:25:26.016 END TEST bdev_json_nonenclosed 00:25:26.016 ************************************ 00:25:26.016 04:47:13 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:26.016 04:47:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:26.016 04:47:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.016 04:47:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:26.016 
************************************ 00:25:26.016 START TEST bdev_json_nonarray 00:25:26.016 ************************************ 00:25:26.016 04:47:13 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:26.016 [2024-11-27 04:47:13.598753] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:25:26.016 [2024-11-27 04:47:13.599083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91447 ] 00:25:26.274 [2024-11-27 04:47:13.793959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.533 [2024-11-27 04:47:13.922802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.533 [2024-11-27 04:47:13.923171] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:25:26.533 [2024-11-27 04:47:13.923290] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:26.533 [2024-11-27 04:47:13.923412] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:26.791 00:25:26.791 real 0m0.718s 00:25:26.791 user 0m0.452s 00:25:26.791 sys 0m0.160s 00:25:26.791 04:47:14 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.791 04:47:14 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:25:26.791 ************************************ 00:25:26.791 END TEST bdev_json_nonarray 00:25:26.791 ************************************ 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:25:26.791 04:47:14 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:25:26.791 00:25:26.791 real 0m49.264s 00:25:26.791 user 1m7.685s 00:25:26.791 sys 0m5.226s 00:25:26.791 04:47:14 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.791 04:47:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:26.791 
************************************ 00:25:26.791 END TEST blockdev_raid5f 00:25:26.791 ************************************ 00:25:26.791 04:47:14 -- spdk/autotest.sh@194 -- # uname -s 00:25:26.791 04:47:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:25:26.791 04:47:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:25:26.791 04:47:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:25:26.791 04:47:14 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:25:26.791 04:47:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:25:26.791 04:47:14 -- spdk/autotest.sh@260 -- # timing_exit lib 00:25:26.791 04:47:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.791 04:47:14 -- common/autotest_common.sh@10 -- # set +x 00:25:26.791 04:47:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:25:26.791 04:47:14 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:25:26.791 04:47:14 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:25:26.792 04:47:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:26.792 04:47:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:26.792 04:47:14 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:25:26.792 04:47:14 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:25:26.792 04:47:14 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:25:26.792 04:47:14 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:25:26.792 04:47:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.792 04:47:14 -- common/autotest_common.sh@10 -- # set +x 00:25:26.792 04:47:14 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:25:26.792 04:47:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:25:26.792 04:47:14 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:25:26.792 04:47:14 -- common/autotest_common.sh@10 -- # set +x 00:25:28.697 INFO: APP EXITING 00:25:28.697 INFO: killing all VMs 00:25:28.697 INFO: killing vhost app 00:25:28.697 INFO: EXIT DONE 00:25:28.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:28.956 Waiting for block devices as requested 00:25:28.956 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:28.956 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:29.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:29.893 Cleaning 00:25:29.893 Removing: /var/run/dpdk/spdk0/config 00:25:29.893 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:29.893 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:29.893 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:29.893 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:29.893 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:29.893 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:29.893 Removing: /dev/shm/spdk_tgt_trace.pid56969 00:25:29.893 Removing: /var/run/dpdk/spdk0 00:25:29.893 Removing: /var/run/dpdk/spdk_pid56740 00:25:29.893 Removing: /var/run/dpdk/spdk_pid56969 00:25:29.893 Removing: /var/run/dpdk/spdk_pid57204 00:25:29.893 Removing: /var/run/dpdk/spdk_pid57308 00:25:29.893 Removing: /var/run/dpdk/spdk_pid57364 00:25:29.893 Removing: /var/run/dpdk/spdk_pid57492 00:25:29.893 Removing: /var/run/dpdk/spdk_pid57515 
00:25:29.893 Removing: /var/run/dpdk/spdk_pid57720 00:25:29.893 Removing: /var/run/dpdk/spdk_pid57837 00:25:29.893 Removing: /var/run/dpdk/spdk_pid57944 00:25:29.893 Removing: /var/run/dpdk/spdk_pid58066 00:25:29.893 Removing: /var/run/dpdk/spdk_pid58174 00:25:29.893 Removing: /var/run/dpdk/spdk_pid58218 00:25:29.893 Removing: /var/run/dpdk/spdk_pid58250 00:25:29.893 Removing: /var/run/dpdk/spdk_pid58326 00:25:29.893 Removing: /var/run/dpdk/spdk_pid58432 00:25:29.893 Removing: /var/run/dpdk/spdk_pid58910 00:25:29.893 Removing: /var/run/dpdk/spdk_pid58987 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59061 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59077 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59225 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59247 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59395 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59414 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59483 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59507 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59571 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59589 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59784 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59826 00:25:29.893 Removing: /var/run/dpdk/spdk_pid59915 00:25:29.893 Removing: /var/run/dpdk/spdk_pid61285 00:25:29.893 Removing: /var/run/dpdk/spdk_pid61502 00:25:29.893 Removing: /var/run/dpdk/spdk_pid61642 00:25:29.893 Removing: /var/run/dpdk/spdk_pid62302 00:25:29.893 Removing: /var/run/dpdk/spdk_pid62513 00:25:29.893 Removing: /var/run/dpdk/spdk_pid62659 00:25:29.893 Removing: /var/run/dpdk/spdk_pid63313 00:25:29.893 Removing: /var/run/dpdk/spdk_pid63649 00:25:29.893 Removing: /var/run/dpdk/spdk_pid63795 00:25:29.893 Removing: /var/run/dpdk/spdk_pid65207 00:25:29.893 Removing: /var/run/dpdk/spdk_pid65466 00:25:29.893 Removing: /var/run/dpdk/spdk_pid65606 00:25:29.893 Removing: /var/run/dpdk/spdk_pid67019 00:25:29.893 Removing: /var/run/dpdk/spdk_pid67283 00:25:29.893 Removing: /var/run/dpdk/spdk_pid67423 
00:25:29.893 Removing: /var/run/dpdk/spdk_pid68837 00:25:29.893 Removing: /var/run/dpdk/spdk_pid69294 00:25:29.893 Removing: /var/run/dpdk/spdk_pid69434 00:25:29.893 Removing: /var/run/dpdk/spdk_pid70948 00:25:29.893 Removing: /var/run/dpdk/spdk_pid71207 00:25:29.893 Removing: /var/run/dpdk/spdk_pid71358 00:25:29.893 Removing: /var/run/dpdk/spdk_pid72870 00:25:29.893 Removing: /var/run/dpdk/spdk_pid73140 00:25:29.893 Removing: /var/run/dpdk/spdk_pid73286 00:25:29.893 Removing: /var/run/dpdk/spdk_pid74794 00:25:29.893 Removing: /var/run/dpdk/spdk_pid75292 00:25:29.893 Removing: /var/run/dpdk/spdk_pid75438 00:25:29.893 Removing: /var/run/dpdk/spdk_pid75587 00:25:29.893 Removing: /var/run/dpdk/spdk_pid76034 00:25:29.893 Removing: /var/run/dpdk/spdk_pid76804 00:25:29.893 Removing: /var/run/dpdk/spdk_pid77210 00:25:29.893 Removing: /var/run/dpdk/spdk_pid77930 00:25:29.893 Removing: /var/run/dpdk/spdk_pid78412 00:25:29.893 Removing: /var/run/dpdk/spdk_pid79218 00:25:29.893 Removing: /var/run/dpdk/spdk_pid79634 00:25:29.893 Removing: /var/run/dpdk/spdk_pid81643 00:25:29.893 Removing: /var/run/dpdk/spdk_pid82093 00:25:29.893 Removing: /var/run/dpdk/spdk_pid82543 00:25:30.152 Removing: /var/run/dpdk/spdk_pid84669 00:25:30.152 Removing: /var/run/dpdk/spdk_pid85160 00:25:30.152 Removing: /var/run/dpdk/spdk_pid85670 00:25:30.152 Removing: /var/run/dpdk/spdk_pid86740 00:25:30.152 Removing: /var/run/dpdk/spdk_pid87074 00:25:30.152 Removing: /var/run/dpdk/spdk_pid88043 00:25:30.152 Removing: /var/run/dpdk/spdk_pid88379 00:25:30.152 Removing: /var/run/dpdk/spdk_pid89337 00:25:30.152 Removing: /var/run/dpdk/spdk_pid89671 00:25:30.152 Removing: /var/run/dpdk/spdk_pid90352 00:25:30.152 Removing: /var/run/dpdk/spdk_pid90632 00:25:30.152 Removing: /var/run/dpdk/spdk_pid90694 00:25:30.152 Removing: /var/run/dpdk/spdk_pid90742 00:25:30.152 Removing: /var/run/dpdk/spdk_pid90995 00:25:30.152 Removing: /var/run/dpdk/spdk_pid91173 00:25:30.152 Removing: /var/run/dpdk/spdk_pid91266 
00:25:30.152 Removing: /var/run/dpdk/spdk_pid91368 00:25:30.152 Removing: /var/run/dpdk/spdk_pid91420 00:25:30.152 Removing: /var/run/dpdk/spdk_pid91447 00:25:30.152 Clean 00:25:30.152 04:47:17 -- common/autotest_common.sh@1453 -- # return 0 00:25:30.152 04:47:17 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:25:30.152 04:47:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.152 04:47:17 -- common/autotest_common.sh@10 -- # set +x 00:25:30.152 04:47:17 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:25:30.152 04:47:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.152 04:47:17 -- common/autotest_common.sh@10 -- # set +x 00:25:30.152 04:47:17 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:30.152 04:47:17 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:30.152 04:47:17 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:30.152 04:47:17 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:25:30.152 04:47:17 -- spdk/autotest.sh@398 -- # hostname 00:25:30.152 04:47:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:30.410 geninfo: WARNING: invalid characters removed from testname! 
00:25:56.948 04:47:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:58.324 04:47:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:00.856 04:47:48 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:04.139 04:47:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:06.706 04:47:53 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:09.237 04:47:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:11.767 04:47:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:11.767 04:47:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:26:11.767 04:47:58 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:26:11.767 04:47:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:11.767 04:47:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:11.767 04:47:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:11.767 + [[ -n 5260 ]] 00:26:11.767 + sudo kill 5260 00:26:11.775 [Pipeline] } 00:26:11.789 [Pipeline] // timeout 00:26:11.794 [Pipeline] } 00:26:11.804 [Pipeline] // stage 00:26:11.808 [Pipeline] } 00:26:11.818 [Pipeline] // catchError 00:26:11.824 [Pipeline] stage 00:26:11.826 [Pipeline] { (Stop VM) 00:26:11.837 [Pipeline] sh 00:26:12.149 + vagrant halt 00:26:15.454 ==> default: Halting domain... 00:26:22.030 [Pipeline] sh 00:26:22.311 + vagrant destroy -f 00:26:25.637 ==> default: Removing domain... 
00:26:25.649 [Pipeline] sh 00:26:25.929 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:26:25.939 [Pipeline] } 00:26:25.954 [Pipeline] // stage 00:26:25.960 [Pipeline] } 00:26:25.974 [Pipeline] // dir 00:26:25.979 [Pipeline] } 00:26:25.993 [Pipeline] // wrap 00:26:26.000 [Pipeline] } 00:26:26.013 [Pipeline] // catchError 00:26:26.022 [Pipeline] stage 00:26:26.025 [Pipeline] { (Epilogue) 00:26:26.038 [Pipeline] sh 00:26:26.319 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:32.895 [Pipeline] catchError 00:26:32.897 [Pipeline] { 00:26:32.911 [Pipeline] sh 00:26:33.193 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:33.452 Artifacts sizes are good 00:26:33.462 [Pipeline] } 00:26:33.479 [Pipeline] // catchError 00:26:33.492 [Pipeline] archiveArtifacts 00:26:33.500 Archiving artifacts 00:26:33.626 [Pipeline] cleanWs 00:26:33.638 [WS-CLEANUP] Deleting project workspace... 00:26:33.638 [WS-CLEANUP] Deferred wipeout is used... 00:26:33.644 [WS-CLEANUP] done 00:26:33.647 [Pipeline] } 00:26:33.663 [Pipeline] // stage 00:26:33.668 [Pipeline] } 00:26:33.681 [Pipeline] // node 00:26:33.687 [Pipeline] End of Pipeline 00:26:33.726 Finished: SUCCESS